EdgeCortix Acquires Multiple Patents for Dynamic Neural Accelerator® AI Processor Technology
TOKYO, Oct. 4, 2021 /PRNewswire/ -- EdgeCortix Inc. (Tokyo, Japan; CEO: Sakyasingha Dasgupta), the company that enables efficient processing of AI at the edge with near-cloud performance, today announced that it has obtained four patents on its reconfigurable artificial intelligence-specific processor technology.
“The four patents acquired in Japan and the United States cover fundamental technologies behind our Dynamic Neural Accelerator (DNA) hardware architecture, a high-energy-efficiency, low-latency AI accelerator IP designed specifically for on-device machine learning. DNA, in combination with a proprietary software stack, is the technology behind EdgeCortix’s first co-processor chip for AI inference. These patents further differentiate our DNA technology from its competitors and are important additions to our existing portfolio of patents on hardware processor and compiler technologies,” commented Sakyasingha Dasgupta, CEO of the EdgeCortix group companies and one of the inventors of the acquired patents.
JP Patent No. 6834097
Date of patent grant: February 8, 2021
Title of the invention: Neural Network Accelerator Hardware-Specific Division of Inference
U.S. Patent Application No. 17/186,003 (allowed)
Date of notification: August 18, 2021
Title of the invention: Dividing Neural Network Accelerator Hardware-Specific Inference into Layer Groups
Assignee: EdgeCortix Pte. Ltd.
Summary of the invention: This invention covers the generation of instructions for performing inference on a hardware system, such as an ASIC or FPGA, that achieves efficient neural network inference by grouping neural network layers together and avoiding external memory accesses between them, reducing the total number of memory accesses compared to processing layers one at a time and storing all intermediate data in external memory. This allows a variety of neural networks, including convolutional neural networks such as MobileNet variants, to be handled flexibly with performance and energy efficiency close to that of a fixed-function neural network chip. Such techniques are especially beneficial when an entire input layer cannot fit in limited on-chip memory. Reducing external memory accesses also reduces performance stochasticity.
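To illustrate the general idea behind this kind of layer grouping (this is a minimal Python sketch using hypothetical names and byte counts, not EdgeCortix's compiler or instruction generator), compare a schedule that round-trips every intermediate activation through external memory with one that fuses consecutive layers whose intermediates fit in on-chip memory, so only each group's final output touches DRAM:

```python
# Hypothetical sketch: count external-memory accesses for per-layer scheduling
# versus layer-group scheduling. All names and sizes are illustrative only.
from dataclasses import dataclass

@dataclass
class Layer:
    name: str
    output_bytes: int          # size of this layer's activation output

def external_accesses_per_layer(layers):
    """Each layer writes its output to external memory; the next layer reads it back."""
    accesses = 0
    for i, layer in enumerate(layers):
        accesses += layer.output_bytes            # write intermediate to DRAM
        if i + 1 < len(layers):
            accesses += layer.output_bytes        # next layer reads it back
    return accesses

def external_accesses_grouped(layers, on_chip_bytes):
    """Fuse consecutive layers whose outputs fit in on-chip memory; only each
    group's final output is written to (and read back from) external memory."""
    accesses = 0
    for i, layer in enumerate(layers):
        last = i + 1 == len(layers)
        fits_on_chip = layer.output_bytes <= on_chip_bytes
        if last or not fits_on_chip:
            accesses += layer.output_bytes        # spill group output to DRAM
            if not last:
                accesses += layer.output_bytes    # next group reads it back
    return accesses

net = [Layer("conv1", 512_000), Layer("dwconv2", 512_000),
       Layer("conv3", 256_000), Layer("conv4", 128_000)]
print(external_accesses_per_layer(net))                       # 2,688,000
print(external_accesses_grouped(net, on_chip_bytes=600_000))  # 128,000
```

The gap between the two counts is the intuition the summary describes: the fewer intermediates that leave the chip, the better the throughput, energy efficiency, and run-to-run predictability.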
U.S. Patent No. 11,144,822
Date of patent grant: October 12, 2021
Title of the invention: Neural Network Accelerator Run-Time Reconfigurability
JP Patent Application No. 2021-079197 (allowed)
Date of notification: September 8, 2021
Title of the invention: Neural Network Accelerator Run-Time Reconfigurability
Assignee: EdgeCortix Pte. Ltd.
Summary of the invention: This invention covers devices for performing neural network inference, such as an accelerator, that include a novel “reduction interconnect” between the compute modules and the on-chip memory to accumulate the outputs of the compute modules on the fly, avoiding additional reads from and writes to on-chip memory. The reduction interconnect is reconfigurable to establish connections between compute modules, bypassing on-chip memory, in a manner that yields efficient inference during the execution of entire tasks or parts of tasks. For example, in an accelerator for inference of an arbitrary deep neural network, the reduction interconnect can let each compute module choose between direct memory access and access via an auxiliary adder circuit. The freedom to select connectivity allows an accelerator to compute multiple input-channel tiles or kernel pixels in parallel, with multiple compute modules running fully synchronously.
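A rough software analogy of that reduction path (a hypothetical Python model, not EdgeCortix RTL; module, tile, and memory names are invented for illustration) contrasts the direct-memory route, where every module's partial sum is written and read back before accumulation, with a route through an adder that reduces the partial sums before anything reaches on-chip memory:

```python
# Hypothetical model of a reconfigurable reduction path between compute
# modules and on-chip memory. All identifiers are illustrative only.
import numpy as np

def run_modules(input_tiles, kernel_tiles):
    """Each compute module produces a partial sum for the same output tile."""
    return [x * w for x, w in zip(input_tiles, kernel_tiles)]

def write_direct(on_chip_mem, partials):
    """Direct-memory path: each module writes its partial sum, then the
    accumulation re-reads them all and writes the result back."""
    for i, p in enumerate(partials):
        on_chip_mem[f"partial_{i}"] = p                     # n writes
    acc = sum(on_chip_mem[f"partial_{i}"] for i in range(len(partials)))  # n reads
    on_chip_mem["output"] = acc                             # 1 write
    return 2 * len(partials) + 1                            # total memory ops

def write_through_reduction(on_chip_mem, partials):
    """Reduction path: partial sums are accumulated before memory, so only
    the reduced result is written once."""
    on_chip_mem["output"] = sum(partials)
    return 1                                                # single memory op

tiles = [np.full((4, 4), c, dtype=np.int32) for c in (1, 2, 3, 4)]
kernels = [np.full((4, 4), 2, dtype=np.int32) for _ in tiles]
partials = run_modules(tiles, kernels)

mem_a, mem_b = {}, {}
print("direct path memory ops:   ", write_direct(mem_a, partials))
print("reduction path memory ops:", write_through_reduction(mem_b, partials))
assert np.array_equal(mem_a["output"], mem_b["output"])     # same result, fewer ops
```

In hardware the choice between the two routes is a per-module configuration rather than a function call, which is what makes the interconnect reconfigurable at run time.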
For more information:
EdgeCortix press room
+81 3-6417-9661
+1 415-818-0430
[email protected]
The EdgeCortix group companies have an extensive portfolio of patents or patent applications covering all key products and creating shareholder value by giving EdgeCortix both the freedom to operate and meaningful product differentiation.
About EdgeCortix
EdgeCortix, founded in 2019, is a leading provider of artificial intelligence hardware acceleration solutions designed specifically for edge computing scenarios. The company’s revolutionary new Dynamic Neural Accelerator (DNA) architecture is a reconfigurable, scalable, and energy-efficient AI processor design. DNA, combined with the company’s proprietary software, enables easy deployment of high-energy-efficiency, low-latency neural network models to custom ASICs or FPGAs. The company provides software, AI processor hardware, and IP that meet the high-performance and low-latency requirements of advanced driver assistance systems, autonomous robots, financial technology, manufacturing, smart cities, and other advanced vision systems.
© 2021 EdgeCortix, Inc. All rights reserved worldwide. EdgeCortix, the EdgeCortix logo, and Dynamic Neural Accelerator are trademarks or registered trademarks of EdgeCortix, Inc. (or its affiliates) in the United States and/or elsewhere. EdgeCortix Inc. is a wholly owned subsidiary of EdgeCortix Pte. Ltd.
SOURCE EdgeCortix, Inc.