Accelerating AI Inference: Untether AI's At-Memory Computing

Untether AI delivers cutting-edge AI inference accelerator ICs and cards, offering unparalleled speed, accuracy, and energy efficiency. Its unique at-memory computing architecture enables cost-effective AI deployments from the cloud to the edge.

Company website
- At-Memory Computing: Unique architecture.
- 20 TOPS/W: Industry-leading energy efficiency.
- Push-Button Inference: Ease of use.
- 2 PetaOps INT8: Peak compute throughput.

Revolutionizing AI with At-Memory Computing

Fast, Accurate, and Energy-Efficient AI Inference

- Industry-leading performance and energy efficiency, measured in metrics such as TOPS/W.
- Proven in silicon, with second-generation products now available.
- MLPerf benchmark results.
- 20 TOPS/W and 2 PetaOps of INT8 performance (see the back-of-envelope sketch below).
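
As a rough illustration of how these two headline figures relate, the sketch below assumes the peak throughput and the efficiency number apply at the same operating point; that pairing is our assumption, not a published specification.

```python
# Back-of-envelope: power implied by the headline throughput and efficiency figures.
# Assumes (our assumption, not a published spec) that both apply at the same
# operating point.

PEAK_THROUGHPUT_TOPS = 2_000    # 2 PetaOps INT8 = 2,000 TOPS
EFFICIENCY_TOPS_PER_W = 20      # 20 TOPS/W

implied_power_w = PEAK_THROUGHPUT_TOPS / EFFICIENCY_TOPS_PER_W
print(f"Implied power at peak: {implied_power_w:.0f} W")  # -> 100 W
```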

Applications Across Industries

- Vision applications, autonomous vehicles, AgTech, financial services, government sector, and more.
- Cloud to edge deployments.

The At-Memory Computing Architecture

- Compute elements sit directly adjacent to memory cells for unrivaled compute density.
- Significantly reduces power consumption by cutting data movement (see the sketch after this list).
- Increases throughput and efficiency.
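
The power savings stem largely from avoiding long off-chip data movement. Below is a minimal back-of-envelope sketch of that effect; the per-byte energy figures and the 50 MB model size are illustrative assumptions, not Untether AI specifications.

```python
# Illustrative comparison of energy spent moving model weights for one pass
# when operands come from off-chip DRAM versus memory cells adjacent to the
# compute elements. Per-byte energies are assumed values for illustration only.

DRAM_PJ_PER_BYTE = 100.0       # assumed off-chip DRAM access cost
AT_MEMORY_PJ_PER_BYTE = 1.0    # assumed cost when memory sits next to compute

def data_movement_energy_uj(bytes_moved: int, pj_per_byte: float) -> float:
    """Energy in microjoules to move `bytes_moved` bytes at `pj_per_byte`."""
    return bytes_moved * pj_per_byte / 1e6

weights_bytes = 50_000_000  # e.g. a hypothetical 50 MB INT8 model

dram_uj = data_movement_energy_uj(weights_bytes, DRAM_PJ_PER_BYTE)
at_mem_uj = data_movement_energy_uj(weights_bytes, AT_MEMORY_PJ_PER_BYTE)

print(f"Off-chip DRAM: {dram_uj:,.0f} uJ per pass")
print(f"At-memory:     {at_mem_uj:,.0f} uJ per pass")
print(f"Reduction:     {dram_uj / at_mem_uj:.0f}x (under these assumptions)")
```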

Contact us

Neurometric provides engineering consulting services for AI hardware. We work with CPUs and GPUs, and we have rare expertise in many of the new and novel AI hardware chips. If you are looking to benchmark your model against various types of compute, need help designing a new AI chip into your device, or want help implementing AI hardware in a new project, please fill out this form or give us a call.
