DeGirum® ORCA™ is a flexible, efficient, and affordable AI accelerator IC. ORCA™ gives application developers the ability to create rich, sophisticated, and highly functional products at a power and price point suitable for the edge.
ORCA’s ability to process pruned models essentially multiplies the available compute and bandwidth, allowing larger, more accurate models to run in real time and enabling cloud-like quality applications at the edge.
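To illustrate why pruning multiplies effective compute and bandwidth, here is a minimal sketch of magnitude-based weight pruning (a generic technique, not DeGirum's specific pruning flow): zeroing the smallest weights lets a sparsity-aware accelerator skip the corresponding multiply-accumulates and memory fetches.

```python
import numpy as np

def prune_by_magnitude(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Return a copy of `weights` with the smallest-magnitude `sparsity` fraction zeroed."""
    k = int(weights.size * sparsity)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold.
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64)).astype(np.float32)
w_pruned = prune_by_magnitude(w, 0.75)

# ~25% of weights remain, so ~4x fewer effective MACs and weight bytes
# for hardware that can exploit the zeros.
nonzero_fraction = np.count_nonzero(w_pruned) / w.size
```

At 75% sparsity, a model roughly four times larger can fit in the same compute and bandwidth budget, which is the mechanism behind the "larger, more accurate models in real time" claim.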
DRAM access support in our AI accelerator offers significant advantages for users. Direct access to DRAM provides faster data transfer rates, which translates into improved performance and reduced latency. It also allows quick and seamless switching of neural network (NN) models: customers can switch between NN models without time-consuming weight transfers, reducing downtime and increasing productivity. This feature is particularly valuable for applications that require frequent model changes, such as image or speech recognition, where different models may be needed to handle varying data sets or specific tasks. By enabling rapid model switching directly from DRAM, our AI accelerator gives users greater flexibility and efficiency in their AI workflows.
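The benefit of DRAM-resident models can be sketched with a toy cache model (the class and method names below are illustrative, not DeGirum's actual API): once a model's weights are resident in accelerator-attached DRAM, "switching" is selecting a different base address rather than re-streaming megabytes of weights over the host bus.

```python
class ModelCache:
    """Toy simulation of keeping NN models resident in accelerator DRAM."""

    def __init__(self):
        self._resident = {}   # model name -> simulated DRAM base offset
        self._next_offset = 0
        self.transfers = 0    # count of slow host-to-DRAM uploads

    def load(self, name: str, size_bytes: int) -> int:
        """Upload a model once; later 'switches' reuse the resident copy."""
        if name not in self._resident:
            self.transfers += 1              # slow path: stream weights from host
            self._resident[name] = self._next_offset
            self._next_offset += size_bytes
        return self._resident[name]          # fast path: just a base address

cache = ModelCache()
a = cache.load("detector", 8 << 20)    # first load: real transfer
b = cache.load("classifier", 4 << 20)  # second model also resident
a2 = cache.load("detector", 8 << 20)   # switch back: lookup, no transfer
```

After the two initial uploads, every subsequent switch between the resident models costs a lookup instead of a multi-megabyte transfer, which is why frequent model changes become practical.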
At our company, we take great pride in our AI accelerator's flexible architecture, which supports both int8 and float32 precision formats. This versatility enables our customers to choose the best format for their specific use case, optimizing performance, accuracy, and power consumption for their unique requirements.

In addition, our AI accelerator features intelligent power management capabilities, allowing it to dynamically allocate resources based on the current workload. By utilizing only the necessary resources, our AI accelerator minimizes power consumption while still delivering optimal performance, making it an environmentally friendly and cost-effective solution.

With its flexible architecture and power management capabilities, our AI accelerator provides a scalable and efficient platform for a wide range of AI applications. Whether you are working with large-scale data sets or require real-time processing, our AI accelerator is equipped to handle your most demanding tasks, delivering exceptional performance, accuracy, and energy efficiency.
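The int8-versus-float32 tradeoff can be seen in a short sketch of generic symmetric quantization (a standard scheme, not necessarily DeGirum's exact one): int8 shrinks weights and activations 4x, at the cost of a small, bounded rounding error.

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Symmetric per-tensor quantization of float32 values to int8."""
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

x = np.linspace(-1.0, 1.0, 256, dtype=np.float32)
q, s = quantize_int8(x)

# Round-trip error is bounded by half a quantization step (scale / 2),
# while storage drops from 4 bytes to 1 byte per value.
err = np.abs(dequantize(q, s) - x).max()
```

This bounded error is why int8 is often the right choice for throughput- and power-sensitive edge deployments, while float32 remains available when maximum accuracy matters.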
Our AI accelerator has a highly scalable architecture, making it an ideal solution for organizations of all sizes. Adding another board with the chip to the system results in a linear increase in performance, providing a seamless upgrade path as your needs evolve over time. This scalability allows organizations to easily and cost-effectively expand their AI infrastructure, while also enabling easy integration with existing systems. In response to customer demand, we can develop boards with several of our chips, leveraging our chip-to-chip interface for even greater scalability and performance. This feature enables organizations to build larger and more complex AI systems with ease, while still maintaining the linear performance scaling that our AI accelerator is known for. With this flexibility, our customers can create customized solutions that meet their specific needs and take full advantage of the power and performance of our AI accelerator.