NTT Corporation has unveiled a powerful new AI inference chip that delivers real-time 4K video processing at 30 frames per second—using under 20 watts of power. This marks a major leap forward for edge computing, where devices process data locally instead of relying on distant cloud servers.
Revealed at NTT’s Upgrade 2025 summit in San Francisco, the chip is built for edge devices like drones, smart cameras, and sensors. It eliminates the need to transmit ultra-high-definition video to the cloud, slashing latency and boosting privacy.
Edge AI vs Cloud: Why Local Processing Is the Future
Traditionally, AI systems sent data from devices to remote cloud servers for processing. While the cloud offers massive computing power, it introduces delays—often too long for real-time tasks like drone navigation or surveillance.
NTT’s chip flips that model. It processes data directly on the device, enabling instant insights. This approach reduces lag, saves bandwidth, and works even without stable internet. For use cases demanding speed, such as emergency response or security, edge AI is a game-changer.
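The latency argument can be made concrete with a rough frame-budget calculation. The sketch below uses hypothetical numbers (the network and inference times are my assumptions, not NTT figures): at 30 fps a system has about 33 ms per frame, and a cloud round trip can eat that budget before any processing happens.

```python
# Illustrative latency-budget comparison (hypothetical numbers, not NTT figures).
# At 30 fps, each frame must be fully processed within ~33 ms to stay real time.

FRAME_BUDGET_MS = 1000 / 30  # ~33.3 ms per frame at 30 fps

def cloud_pipeline_latency(uplink_ms: float, inference_ms: float, downlink_ms: float) -> float:
    """Round trip: send the frame to a server, run inference, return results."""
    return uplink_ms + inference_ms + downlink_ms

def edge_pipeline_latency(inference_ms: float) -> float:
    """On-device inference: no network hop at all."""
    return inference_ms

# Assumed values: 40 ms each way over a mobile network, 10 ms server-side
# inference, 25 ms on-device inference on a low-power accelerator.
cloud = cloud_pipeline_latency(uplink_ms=40, inference_ms=10, downlink_ms=40)  # 90 ms
edge = edge_pipeline_latency(inference_ms=25)                                   # 25 ms

print(f"frame budget: {FRAME_BUDGET_MS:.1f} ms")
print(f"cloud path:   {cloud:.1f} ms -> {'misses' if cloud > FRAME_BUDGET_MS else 'meets'} budget")
print(f"edge path:    {edge:.1f} ms -> {'misses' if edge > FRAME_BUDGET_MS else 'meets'} budget")
```

Even with a fast server, the network hops alone can push the cloud path past the 33 ms budget, which is why local processing wins for real-time video tasks.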
Drones and Devices Get a Real-Time Upgrade
With NTT’s chip installed, drones can now detect objects or people from up to 150 meters away—the legal limit for flight altitude in Japan. That’s a 5x range improvement compared to older real-time AI systems capped at just 30 meters.
The chip unlocks new capabilities in:
- Remote infrastructure inspections
- Emergency response in disaster zones
- Smart farming with wide-field crop monitoring
- Offline surveillance in bandwidth-limited areas
All of this runs on under 20 watts, a fraction of the power drawn by GPU-based AI servers, which are far too power-hungry to operate on mobile or battery-powered systems.
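A back-of-envelope battery calculation shows why the power envelope matters. The figures below are illustrative assumptions (a typical mid-size drone battery and a rough datacenter-GPU draw), not NTT specifications:

```python
# Back-of-envelope battery budget (illustrative numbers, not NTT specifications).
# Shows why a ~20 W inference chip is viable on a drone while a GPU-server-class
# part drawing hundreds of watts is not.

def runtime_hours(battery_wh: float, draw_watts: float) -> float:
    """Hours of operation for a given battery capacity and power draw."""
    return battery_wh / draw_watts

BATTERY_WH = 100.0    # typical mid-size drone battery (assumed)
EDGE_CHIP_W = 20.0    # power envelope cited for the NTT chip
GPU_SERVER_W = 300.0  # rough draw of a single datacenter-class GPU (assumed)

print(f"edge chip: {runtime_hours(BATTERY_WH, EDGE_CHIP_W):.1f} h on compute alone")
print(f"GPU class: {runtime_hours(BATTERY_WH, GPU_SERVER_W):.2f} h on compute alone")
# 5.0 h vs 0.33 h of compute, before the motors (which dominate) draw anything.
```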
What Powers the Chip: NTT’s AI Inference Engine
At the heart of this chip is NTT’s custom-built inference engine. It uses advanced optimization techniques to deliver powerful AI with minimal energy use:
- Interframe Correlation: Compares video frames to reduce repetitive calculations
- Dynamic Bit-Precision Control: Adjusts numerical accuracy on the fly, conserving energy
- Native YOLOv3 Execution: Runs YOLOv3, one of the fastest object-detection models available, directly on the chip
Together, these features make high-speed, high-accuracy video analysis possible in places where traditional solutions can’t go.
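The first two techniques can be sketched in a few lines. The code below is my own conceptual illustration, not NTT's implementation (which does this in hardware): a change mask between consecutive frames determines which pixels need recomputation, and a simple policy picks a bit width from the magnitude of change.

```python
# Conceptual sketch of two of the optimizations above (my own illustration;
# NTT's actual engine implements these in silicon):
# 1) Interframe correlation: only re-run heavy computation where the frame
#    actually changed relative to the previous one.
# 2) Dynamic bit-precision control: use cheaper low-precision arithmetic
#    where the change is small.

from typing import List

Frame = List[List[int]]  # tiny grayscale "frame" as a 2-D list of pixel values

def tile_changed(prev: Frame, cur: Frame, threshold: int = 8) -> List[List[bool]]:
    """Per-pixel change mask: True where the difference exceeds the threshold."""
    return [[abs(c - p) > threshold for p, c in zip(prow, crow)]
            for prow, crow in zip(prev, cur)]

def pick_precision(diff: int) -> int:
    """Choose a bit width from the magnitude of change (illustrative policy)."""
    if diff < 16:
        return 4   # small change: cheap low-precision arithmetic
    if diff < 64:
        return 8
    return 16      # large change: full precision

def process(prev: Frame, cur: Frame) -> dict:
    """Count how much work interframe correlation lets us skip."""
    mask = tile_changed(prev, cur)
    total = sum(len(row) for row in mask)
    recomputed = sum(cell for row in mask for cell in row)
    return {"pixels": total, "recomputed": recomputed, "skipped": total - recomputed}

prev = [[10, 10, 10], [10, 10, 10]]
cur  = [[10, 12, 10], [90, 10, 10]]  # one large change, one tiny one
print(process(prev, cur))            # only the pixel that changed by 80 is recomputed
print(pick_precision(2), pick_precision(30), pick_precision(80))  # 4 8 16
```

On mostly static surveillance footage, masks like this skip the bulk of the per-frame arithmetic, which is where the energy savings come from.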
From Lab to Market: Commercial Rollout and IOWN Vision
NTT plans to commercialize the chip in fiscal year 2025 through its group company, NTT Innovative Devices Corporation.
The chip is also central to NTT’s IOWN (Innovative Optical and Wireless Network) initiative. Within IOWN’s Data-Centric Infrastructure, the chip will pair with the All-Photonics Network to enable ultra-low latency communication.
Beyond performance, NTT is teaming up with NTT DATA to integrate Attribute-Based Encryption (ABE) into the chip’s ecosystem. This will allow sensitive AI applications—like healthcare or smart cities—to benefit from both real-time processing and secure, fine-grained access control.
Pushing the Boundaries of Edge AI
NTT’s AI chip isn’t just a technical milestone—it’s a strategic move toward a more intelligent, sustainable digital society. As a global tech powerhouse with $92 billion in annual revenue and a presence in 190 countries, NTT is positioning itself at the forefront of edge AI innovation.
This chip enables:
- Drones to fly smarter, farther, and faster
- Cameras to analyze events without cloud dependency
- Critical systems to process data securely and instantly
In short, NTT is turning the edge into the new frontier for AI—where real-time action meets real-world needs.