Archive

[Paper/SCIE] A Novel CNFET SRAM-Based Compute-In-Memory for BNN Consideri…

Post Information

Author: Admin
Date: 25-08-08 15:40

Body

* Institution: Korea Electronics Technology Institute (한국전자기술연구원)

* Journal: Electronics (MDPI)


* Abstract *

As AI models grow in complexity to enhance accuracy, supporting hardware encounters challenges such as heightened power consumption and diminished processing speed due to high throughput demands. Compute-in-memory (CIM) technology emerges as a promising solution. Furthermore, carbon nanotube field-effect transistors (CNFETs) show significant potential in bolstering CIM technology. Despite advancements in silicon semiconductor technology, CNFETs stand as formidable competitors, offering advantages in reliability, performance, and power efficiency. This is particularly pertinent given the ongoing challenges posed by the reduction in silicon feature size. We propose an ultra-low-power architecture leveraging CNFETs for Binary Neural Networks (BNNs), featuring a state-of-the-art 8T SRAM bit cell and a CNFET model to optimize performance in intricate AI computations. Through meticulous optimization, we fine-tune the CNFET model by adjusting tube counts and chiral vectors, as well as optimizing transistor ratios for the SRAM transistors and nanotube diameters. SPICE simulation in 32 nm CNFET technology facilitates the determination of optimal transistor ratios and chiral vectors across various nanotube diameters under a 0.9 V supply voltage. Comparative analysis with conventional FinFET-based CIM structures underscores the superior performance of our CNFET SRAM-based CIM design, achieving a 99% reduction in power consumption and a 91.2% decrease in delay compared to state-of-the-art designs.
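Two quantities in the abstract can be made concrete with standard formulas: the nanotube diameter fixed by a chiral vector (n, m), d = (a/π)·√(n² + nm + m²) with graphene lattice constant a ≈ 0.246 nm, and the binary dot product that an SRAM-based CIM array for BNNs evaluates in place via XNOR and popcount. The sketch below illustrates both; the function names and the example chirality are our own for illustration and are not taken from the paper.

```python
import math

def cnt_diameter_nm(n: int, m: int, a: float = 0.246) -> float:
    """Diameter (nm) of a carbon nanotube with chiral vector (n, m).

    Standard CNT geometry: d = (a / pi) * sqrt(n^2 + n*m + m^2),
    where a ~ 0.246 nm is the graphene lattice constant.
    """
    return (a / math.pi) * math.sqrt(n * n + n * m + m * m)

def bnn_dot(w_bits: list, x_bits: list) -> int:
    """BNN dot product via XNOR + popcount (the in-array CIM operation).

    With weights/activations in {-1, +1} encoded as bits {0, 1},
    sum(w_i * x_i) = 2 * popcount(XNOR(w, x)) - N.
    """
    n = len(w_bits)
    pop = sum(1 for w, x in zip(w_bits, x_bits) if w == x)  # XNOR popcount
    return 2 * pop - n

# Example chirality (illustrative only): a (19, 0) zigzag tube.
print(f"d(19,0) = {cnt_diameter_nm(19, 0):.3f} nm")
# 4-element binary dot product; bits {0,1} encode values {-1,+1}.
print(f"bnn_dot = {bnn_dot([1, 0, 1, 1], [1, 1, 1, 0])}")
```

The XNOR-popcount identity is why a single SRAM column can accumulate a BNN partial sum: each bit cell contributes one XNOR term, and the bit-line charge (or a counter) realizes the popcount.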


The full text is available via the link below.

* Google Drive: https://drive.google.com/file/d/1VQdSa-mn24Eb78UbDiTB9rOf-SmfPnsW/view?usp=sharing
