OPENEDGES AI Accelerator (NPU) & Memory Subsystem IP licensed for Eyenix AI-powered surveillance camera chipset

October 13, 2020 — OPENEDGES Technology, Inc., the world’s leading supplier of AI computing IP solutions, announced today that its AI computing IP solution, comprising an AI accelerator (NPU) and a memory subsystem consisting of a network-on-chip interconnect and DDR memory controllers, has been licensed to Eyenix Co., Ltd. for its next-generation AI-powered IP camera chipset.

Eyenix Co., Ltd. is a leading fabless company specializing in high-definition video signal processing and intelligent vision system solutions. The Eyenix EN675 is a highly integrated SoC for smart video surveillance, automotive imaging equipment, broadcast and medical devices, and other applications. The EN675 combines powerful computer vision performance, high-quality image processing, and a high-performance memory system to enable next-generation smart IP camera solutions.

OPENEDGES’ AI computing IP solution provides the following: an AI accelerator (ENLIGHT™ NPU) combined with the ORBIT™ memory subsystem, which comprises a network-on-chip interconnect (OIC™) and a DDR memory controller (OMC™). The ENLIGHT™ NPU is a highly scalable processor IP for computer vision and artificial intelligence, and it supports all popular deep learning frameworks. Inference chips generally require considerably more memory bandwidth than other applications, and the tight integration of OPENEDGES’ AI NPU and memory subsystem delivers exceptionally high bandwidth efficiency.

“With the OPENEDGES AI Computing Platform IP, we have successfully released the EN675, an AI-powered image signal processing SoC. It enables high-performance edge computing with ultra-low power. The OPENEDGES NPU and memory subsystem are highly optimized and helped us achieve high-quality video and advanced video analytics. Together with our embedded image signal processor (ISP), the EN675 will be deployed for a wide range of computer vision applications,” said JungHyun Hwang, Ph.D., CEO of Eyenix Co., Ltd.

OPENEDGES is the only IP company that offers the synergy of combining its AI NPU with memory subsystem IP. With the addition of the highly optimized memory system, customers can drive OPENEDGES’ NPU to the highest TOPS/W performance. New intelligent edge systems appear every day, and they need to process input data with low latency and high throughput while staying under tight power budgets. This is achievable only through the tight integration and optimization of OPENEDGES’ AI NPU and memory subsystem.
