Building a Proof of Concept That Expands the Potential of GelSight’s Technology
GelSight, an MIT-founded startup specializing in tactile sensing technology, wanted to explore the feasibility of using smartphones to generate 3D models through touch-based imaging. The goal was to build a proof-of-concept (POC) iOS app that connects with the GelSight Mini Sensor to capture depth maps of objects and process the data on the device.
Developing a Tactile Sensing POC with AI-Powered Depth Mapping
GelSight’s technology captures the microscopic details of surfaces by pressing a soft elastomer against an object and converting the deformations into a high-resolution 3D model. Our task was to build an iOS application that connects to the GelSight Mini Sensor via Bluetooth, captures images, processes them through a neural network, and visualizes depth data.
The app was designed for compatibility with the iPhone 14 Pro Max, leveraging its macro camera for detailed imaging. AI-driven depth mapping was powered by PyTorch Lite. While dedicated GelSight hardware provides highly refined results, adapting the technology for an iOS environment required careful optimization.
“We had a tight timeline and high expectations — Nomtek delivered. The app became a key part of our sales pitch and helped us open important conversations with partners and investors. What impressed me most was how quickly they understood our goals and adapted the design to match exactly what we needed for live demos.”
Mark Reimer, Co-Founder at Hamelin Tech
Scope of Work
This project called for a specialized approach, balancing hardware integration, AI processing, and real-time 3D visualization. Key aspects of our work included:
iPhone Application with Bluetooth Sensor Integration
We built an iOS app that connects to the GelSight Mini Sensor, facilitating real-time data capture and processing. The setup allows for quick scanning and easy adaptation for various industries.
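As an illustration of the connection flow, here is a minimal sketch using CoreBluetooth, iOS's standard Bluetooth framework. The service and characteristic UUIDs are hypothetical placeholders, not the GelSight Mini's actual identifiers, and frames are assumed to arrive as raw Data notifications:

```swift
import CoreBluetooth

// Minimal sketch of the sensor connection flow. The UUIDs below are
// hypothetical placeholders, not GelSight's actual identifiers.
final class SensorConnection: NSObject, CBCentralManagerDelegate, CBPeripheralDelegate {
    private let sensorServiceUUID = CBUUID(string: "FFE0")        // assumed
    private let frameCharacteristicUUID = CBUUID(string: "FFE1")  // assumed

    private var central: CBCentralManager!
    private var sensor: CBPeripheral?

    /// Called with each raw frame the sensor delivers.
    var onFrame: ((Data) -> Void)?

    override init() {
        super.init()
        central = CBCentralManager(delegate: self, queue: nil)
    }

    // Start scanning once Bluetooth is powered on.
    func centralManagerDidUpdateState(_ central: CBCentralManager) {
        guard central.state == .poweredOn else { return }
        central.scanForPeripherals(withServices: [sensorServiceUUID])
    }

    func centralManager(_ central: CBCentralManager, didDiscover peripheral: CBPeripheral,
                        advertisementData: [String: Any], rssi RSSI: NSNumber) {
        sensor = peripheral  // keep a strong reference or the connection is dropped
        central.stopScan()
        central.connect(peripheral)
    }

    func centralManager(_ central: CBCentralManager, didConnect peripheral: CBPeripheral) {
        peripheral.delegate = self
        peripheral.discoverServices([sensorServiceUUID])
    }

    func peripheral(_ peripheral: CBPeripheral, didDiscoverServices error: Error?) {
        guard let service = peripheral.services?.first else { return }
        peripheral.discoverCharacteristics([frameCharacteristicUUID], for: service)
    }

    func peripheral(_ peripheral: CBPeripheral, didDiscoverCharacteristicsFor service: CBService,
                    error: Error?) {
        guard let characteristic = service.characteristics?.first else { return }
        peripheral.setNotifyValue(true, for: characteristic)  // subscribe to frame updates
    }

    func peripheral(_ peripheral: CBPeripheral, didUpdateValueFor characteristic: CBCharacteristic,
                    error: Error?) {
        if let data = characteristic.value { onFrame?(data) }
    }
}
```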
AI-Powered Depth Mapping with PyTorch Lite
We integrated GelSight's pre-trained model into the iOS application and optimized its performance. We refined the post-processing of neural network outputs using Apple's Accelerate framework (specifically, its image and signal processing components). Our team also benchmarked Accelerate against OpenCV to find the best balance of speed and accuracy on iOS devices.
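As a sketch of what that post-processing can look like, the snippet below normalizes a raw depth tensor and applies a 3×3 smoothing pass with vDSP, Accelerate's signal processing component. The normalization scheme and kernel are illustrative assumptions, not GelSight's actual pipeline:

```swift
import Accelerate

/// Normalize a flattened depth tensor to the 0...1 range.
/// `rawOutput` stands in for the network's float output; the
/// min/max scheme here is illustrative.
func normalizeDepthMap(_ rawOutput: [Float]) -> [Float] {
    let minDepth = vDSP.minimum(rawOutput)
    let maxDepth = vDSP.maximum(rawOutput)
    let range = max(maxDepth - minDepth, .ulpOfOne)  // avoid divide-by-zero

    // (x - min) / range, as two vectorized passes over the buffer.
    let shifted = vDSP.add(-minDepth, rawOutput)
    return vDSP.divide(shifted, range)
}

/// Optional 3×3 box blur to suppress frame noise, using the classic
/// vDSP 3×3 convolution filter.
func smoothDepthMap(_ depth: [Float], rows: Int, cols: Int) -> [Float] {
    let kernel = [Float](repeating: 1.0 / 9.0, count: 9)
    var output = [Float](repeating: 0, count: depth.count)
    vDSP_f3x3(depth, vDSP_Length(rows), vDSP_Length(cols), kernel, &output)
    return output
}
```

Equivalent operations exist in OpenCV (e.g., cv::normalize and cv::filter2D), which is what made a like-for-like comparison between the two libraries possible.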
Custom 3D Visualization in SceneKit
To achieve the desired visual fidelity and performance, we customized the rendering pipeline in SceneKit to work with depth maps for accurate surface visualization.
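A minimal sketch of that idea: turning a normalized depth map into a triangle mesh that SceneKit can render. The grid layout and height scale are illustrative choices, and normals and materials are omitted for brevity:

```swift
import SceneKit

/// Build a surface mesh from a normalized depth map. Grid layout and
/// the height scale are illustrative choices for this sketch.
func makeSurfaceGeometry(depth: [Float], rows: Int, cols: Int,
                         heightScale: Float = 0.2) -> SCNGeometry {
    // One vertex per sample: x/z span the unit square, y is the depth.
    var vertices: [SCNVector3] = []
    vertices.reserveCapacity(rows * cols)
    for r in 0..<rows {
        for c in 0..<cols {
            vertices.append(SCNVector3(x: Float(c) / Float(cols - 1),
                                       y: depth[r * cols + c] * heightScale,
                                       z: Float(r) / Float(rows - 1)))
        }
    }

    // Two triangles per grid cell.
    var indices: [Int32] = []
    for r in 0..<(rows - 1) {
        for c in 0..<(cols - 1) {
            let i = Int32(r * cols + c)
            let below = i + Int32(cols)
            indices += [i, below, i + 1,
                        i + 1, below, below + 1]
        }
    }

    let source = SCNGeometrySource(vertices: vertices)
    let element = SCNGeometryElement(indices: indices, primitiveType: .triangles)
    return SCNGeometry(sources: [source], elements: [element])
}
```

The returned geometry can be attached to an SCNNode and rebuilt whenever a new scan arrives, keeping the visualization in sync with incoming frames.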
Performance Optimization and Native Development
Integrating PyTorch Lite with iOS involved working in C++ alongside native mobile languages, with manual memory management to improve efficiency and processing speed.
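One memory-management tactic from that kind of bridging layer, shown as a sketch: allocate the input and output buffers once and reuse them for every frame, rather than paying an allocation per inference. The runModel closure stands in for the call into the C++ wrapper and is a hypothetical interface:

```swift
/// Sketch of one optimization from the bridging layer: preallocate
/// input/output buffers once and reuse them for every frame.
/// `runModel` stands in for the call into the Objective-C++ /
/// PyTorch Lite wrapper and is a hypothetical interface.
final class InferenceBuffers {
    private let capacity: Int
    private let input: UnsafeMutablePointer<Float>
    private let output: UnsafeMutablePointer<Float>

    init(pixelCount: Int) {
        capacity = pixelCount
        input = .allocate(capacity: pixelCount)
        output = .allocate(capacity: pixelCount)
        input.initialize(repeating: 0, count: pixelCount)
        output.initialize(repeating: 0, count: pixelCount)
    }

    deinit {
        // These buffers live outside ARC, so they must be freed by hand.
        input.deallocate()
        output.deallocate()
    }

    func process(frame: [Float],
                 runModel: (UnsafePointer<Float>, UnsafeMutablePointer<Float>, Int) -> Void) -> [Float] {
        precondition(frame.count == capacity)
        input.update(from: frame, count: capacity)  // copy in, no fresh allocation
        runModel(input, output, capacity)           // C++ side writes the depth map
        return Array(UnsafeBufferPointer(start: output, count: capacity))
    }
}
```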
Solution
Smart Adaptation of Tactile Imaging for Mobile
Our work proved that GelSight’s technology could be adapted for smartphone use, offering a compact and cost-effective alternative to specialized scanning devices.
By processing images on the phone itself, the system reduced reliance on external computing power, making tactile sensing more accessible. The introduction of a mobile app opens the door to future applications across industries like dermatology, forensics, and robotics.
While further training of the AI model is needed to refine accuracy, the POC validated the core concept and set the foundation for expanding GelSight’s applications into industries requiring high-detail surface analysis.
By quickly absorbing domain knowledge and applying our technical expertise, we acted not just as developers but as a strategic partner, advising on improvements and delivering a working prototype that expands the reach of GelSight's technology.
Key takeaways:
- A functional iOS application that integrates with GelSight’s hardware
- A validated workflow for AI-powered depth mapping using smartphone cameras
- A scalable framework for expanding into industries such as dermatology, forensic analysis, and robotics
Team Composition
The team consisted of tightly aligned, cross-functional specialists who explored the feasibility of the POC in surface imaging.
iOS Developer
Designer
Python Developer
QA Engineer