Nexa.ai has introduced a new AI search agent called “Hyperlink,” designed to run entirely on local hardware. The application is built for Nvidia RTX AI PCs and functions as an on-device assistant capable of turning personal data into structured insights. By connecting information from notes, slides, PDFs, and images, the tool aims to organize and interpret a user's files automatically.
According to Nvidia, Hyperlink's key differentiator is that it processes queries locally rather than sending them to remote servers. All scanned files and user data remain on the device, so personal or confidential information never leaves the computer. This approach is intended to appeal to professionals who manage sensitive data but still want the benefits of generative AI.
Performance benchmarks reported by TechRadar indicate significant speed improvements. When tested on an RTX 5090 system, Hyperlink delivered up to three times faster indexing and double the large language model (LLM) inference speed compared to previous builds. Nvidia claims that its optimization of retrieval-augmented generation (RAG) allows the tool to process dense data folders much more quickly; for example, a 1 GB collection of data that previously took nearly 15 minutes to index can now be processed in approximately 5 minutes.
Beyond raw speed, Hyperlink moves away from static keyword matching. It utilizes the reasoning capabilities of LLMs to interpret user intent, allowing it to locate relevant materials even if file names are obscure or unrelated to their content. The system can also connect related ideas across multiple documents to provide structured answers with clear references.
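The difference between keyword matching and intent-based retrieval can be illustrated with a toy sketch. Hyperlink's actual pipeline uses LLM embeddings and is not public; here a small hand-written synonym map stands in for learned semantics, and all file names and helper functions are illustrative, not part of the product.

```python
# Toy sketch: ranking files by content similarity rather than by file name.
# A real RAG system would use LLM embedding vectors; the SYNONYMS map below
# is a crude stand-in for that semantic understanding.
from collections import Counter
import math

SYNONYMS = {"quarterly": "q3", "earnings": "revenue", "deck": "slides"}

def vectorize(text):
    """Bag-of-words vector with a synonym-normalization step."""
    tokens = [SYNONYMS.get(t, t) for t in text.lower().split()]
    return Counter(tokens)

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Note the obscure file name: keyword search on names would miss it.
documents = {
    "notes_final_v2.txt": "q3 revenue slides for the board meeting",
    "todo.txt": "buy groceries and call the dentist",
}

def search(query):
    qv = vectorize(query)
    # Rank by similarity of contents, not by whether the name matches.
    return max(documents, key=lambda name: cosine(qv, vectorize(documents[name])))

print(search("quarterly earnings deck"))  # → notes_final_v2.txt
```

Even though the query shares no literal words with the winning file's name, the normalized content vectors overlap, so the right document surfaces; an embedding model generalizes the same idea far beyond a fixed synonym table.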