Cortica emerged from a research project at Israel’s esteemed Technion Institute that led to a series of breakthrough discoveries into how the mammalian cortex deciphers visual environments. The three researchers who would become Cortica’s founders combined their expertise in neuroscience, electrical engineering, and computer science to hack into live rat brain tissue and “encode” digital images. Mimicking these natural biological processes is the core of Cortica’s autonomous AI.
A DIFFERENT KIND OF TECH COMPANY
Since its founding in 2007, Cortica has become the leader in AI technology for autonomous platforms. The company’s autonomous AI is based on proprietary brain research and uses unsupervised-learning technology backed by more than 200 patents. Cortica understands the visual world on a human level, significantly exceeding the capabilities of the rest of the industry.
With headquarters in Tel Aviv and offices in New York City, Cortica has 100 employees, including leading AI researchers and veterans of elite Israeli military intelligence units.
This autonomous AI is embedded in next-generation, ultra-scale platforms where understanding images is a critical task. Cortica provides the intelligence that enables autonomous vehicles, smart cities, and more.
Cortica continues to revolutionize the way machines perceive and interpret visual information. With global offices in Israel and New York, and backed by significant funding from strategic partners, Cortica has developed autonomous AI with revolutionary unsupervised, self-learning technology. Cortica’s patent-protected engine meets the demanding real-time, large-scale, zero-error-tolerance requirements of the most ambitious computer-vision projects.
INTELLIGENT SOLUTIONS FOR MODERN CITIES
Cortica is empowering cameras and UAVs with true visual understanding to create the city of tomorrow: protecting lives and improving infrastructure with real-time, flexible, unsupervised AI.
THE BRAINS BEHIND THE EYES
City-wide security systems produce millions of hours of footage that is impossible to monitor manually, yet this visual big data holds tremendous insights.
Cortica's unsupervised AI combs through these hours of video in real time to identify patterns, concepts, and alert situations, driving improvements in safety, traffic management, and much more.
Intuitive search unlocks the true power of the platform. Searching by text or image puts crucial insights at the operator's fingertips.
Cortica provides industry-leading facial detection and recognition with unmatched accuracy. The technology automatically organizes and groups faces to provide instant feedback.
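As a rough illustration of what "automatically grouping faces" means, the sketch below greedily clusters face embeddings by distance. The embeddings, threshold, and greedy strategy are all invented for the example; Cortica's actual face-clustering method is proprietary.

```python
import numpy as np

def group_faces(embeddings, threshold=0.5):
    """Greedy grouping: each face embedding joins the first existing
    group whose representative lies within `threshold` (Euclidean);
    otherwise it starts a new group. Illustrative only."""
    groups = []  # list of (representative embedding, member indices)
    for i, e in enumerate(embeddings):
        for rep, members in groups:
            if np.linalg.norm(e - rep) < threshold:
                members.append(i)
                break
        else:
            groups.append((e, [i]))
    return [members for _, members in groups]

# Two people, two sightings each (tight embeddings per person).
faces = np.array([[0.0, 0.0], [0.1, 0.0], [3.0, 3.0], [3.1, 3.0]])
grouped = group_faces(faces)
```

A production system would replace the greedy scan with proper clustering over learned embeddings, but the output shape is the same: faces of the same person land in the same group.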
Situational and contextual behavior analysis interprets actions and responses, alerting operators when heightened awareness is required. Specific types of behavior can be analyzed and understood in real time.
SMART DRONE AND AERIAL SOLUTIONS
With less than 10% of footage ever watched, a great deal of key data is lost: data that can provide critical information about the health of our cities. Cortica's unsupervised AI analyzes and interprets this big data to make infrastructure maintenance more effective and efficient.
UNDERSTANDING THE WORLD AROUND THE VEHICLE
Cortica’s revolutionary automotive visual intelligence platform is built on the foundation of a mature, patented, self-learning technology. The robust signature-based representation and bottom-up, fine-grained, unsupervised learning capabilities enable a more detailed, comprehensive, and precise interpretation of the car’s surroundings. The lightweight and efficient computational framework fortifies autonomous vehicles with the power of Autonomous AI. Cortica's AI operates at the core of four product lines addressing the intricacies and complexities of fully autonomous driving.
The Cortica platform gains a deep understanding of the car’s environment by immediately recognizing both generic and granular classes of objects, up to the level of full scene reconstruction and prediction. The technology recognizes more than 10,000 fine-grained concepts.
THE SYSTEM RECOGNIZES
Vehicles & Trucks | Bicycles & Motorcycles | Pedestrians | Complex Contextual States | Motion States | And thousands more
With fine-grained recognition, the system identifies everything from pedestrians with baby strollers, to hoverboards, to individuals walking while looking at their smartphones. These robust capabilities extend to all concepts and tangible objects.
Beyond sensory perception, Cortica's Autonomous AI interprets complex contextual states with an added layer of predictive AI. This allows the system to assign probabilities to an object’s next possible course of action while simultaneously predicting additional objects likely to enter the frame. This deep understanding is key for both policy and planning.
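Assigning probabilities to an object's next course of action can be pictured, in its simplest form, as a Markov-style transition model estimated from observed behavior. The states and sequences below are invented for the example; Cortica's actual predictive layer is far richer than this toy sketch.

```python
from collections import Counter, defaultdict

def learn_transitions(sequences):
    """Estimate P(next state | current state) from observed behavior
    sequences -- a toy Markov model illustrating how probabilities can
    be attached to an object's next possible action."""
    counts = defaultdict(Counter)
    for seq in sequences:
        for cur, nxt in zip(seq, seq[1:]):
            counts[cur][nxt] += 1
    return {s: {n: c / sum(nxts.values()) for n, c in nxts.items()}
            for s, nxts in counts.items()}

# Hypothetical pedestrian behavior sequences observed by the system.
observed = [
    ["walking", "at_curb", "crossing"],
    ["walking", "at_curb", "waiting"],
    ["walking", "at_curb", "crossing"],
]
model = learn_transitions(observed)
# model["at_curb"] now maps each possible next action to its probability.
```

Given a pedestrian currently "at_curb", the model assigns a 2/3 probability to "crossing", exactly the kind of estimate a planner can weigh against vehicle speed and distance.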
MAPPING AND LOCALIZATION
For a vehicle to position itself accurately, it requires a comprehensive and up-to-date visual map of its surroundings. Cortica’s Autonomous AI generates and continually updates a highly detailed map at scale, enabling the car to position itself in space with absolute precision in all driving conditions and scenarios.
Cortica’s Autonomous AI maps visual features to high-dimensional, linear signatures that are a portable and lightweight representational format.
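One generic way to picture mapping features to compact, portable signatures is locality-sensitive hashing via random projections: nearby feature vectors yield signatures that differ in few bits. This is a textbook stand-in, not Cortica's proprietary signature algorithm; the bit width and seed are arbitrary.

```python
import numpy as np

def make_signature(features, n_bits=64, seed=0):
    """Project a feature vector onto random hyperplanes and keep only
    the signs, producing a compact binary signature. Similar inputs
    produce similar signatures (small Hamming distance).
    Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((n_bits, len(features)))
    return (planes @ features > 0).astype(np.uint8)

# Two slightly different views of the same scene hash to nearby signatures.
v = np.random.default_rng(1).standard_normal(128)
sig_a = make_signature(v)
sig_b = make_signature(v + 0.01)  # small perturbation of the same features
```

A 64-bit signature for a 128-dimensional feature vector illustrates the "portable and lightweight" property: the representation shrinks by orders of magnitude while preserving similarity.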
SIGNATURE BASED TECHNOLOGY
Cortica’s technology can use any visual cue as a landmark, not just a closed set of objects. This creates a true ‘use-anywhere’ solution that reinforces localization precision.
Mapping information can be collected from any camera-equipped vehicle.
The platform constantly identifies changes in the environment by comparing existing and new signatures, keeping the map accurate to the minute.
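The comparison step can be pictured as a Hamming-distance check between the signature stored in the map for a location and the newly observed one. The bit width and threshold below are assumptions for illustration; the real representation and update policy are proprietary.

```python
import numpy as np

def scene_changed(stored_sig, new_sig, threshold=10):
    """Flag a map update when the newly observed binary signature
    differs from the stored one in more than `threshold` bits.
    Purely illustrative change-detection sketch."""
    distance = int(np.sum(stored_sig != new_sig))
    return distance > threshold

stored = np.zeros(64, dtype=np.uint8)
unchanged = stored.copy()
changed = stored.copy()
changed[:20] = 1  # e.g. new construction altered 20 signature bits
```

Because the signatures are tiny compared to raw video, this check (and the resulting map update) is cheap enough to run continuously and to broadcast to other vehicles.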
Cortica's light and universal signature files update the database and can be instantly shared among vehicles to ensure delivery of up-to-date driving information.
This solves, in one sweep, the scalability, robustness, and update limitations of existing solutions.
Cortica's signature is able to fuse multiple sensor inputs into a single representation space. The fused space leverages the expressive benefits of any added sensor without the limitations of a secondary, rule-based fusion stage.
Fusing multiple data sources into a single representation space provides a more robust and complete understanding. Utilizing multiple sensors allows the car to handle situations where even a human would have tremendous difficulty, such as a torrential downpour or extremely heavy fog. In these extreme circumstances, radar, lidar, and audio provide an added layer of safety.
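A minimal way to sketch "a single representation space" is early fusion: normalize each modality's feature vector and concatenate, so no one sensor dominates. This generic scheme is only an illustration of the idea, not Cortica's actual fusion method; the sensor names and dimensions are invented.

```python
import numpy as np

def fuse(sensor_features):
    """Fuse per-sensor feature vectors (e.g. camera, radar, lidar,
    audio) into one vector by L2-normalizing each modality and
    concatenating them. Generic early-fusion sketch."""
    parts = []
    for vec in sensor_features.values():
        v = np.asarray(vec, dtype=float)
        norm = np.linalg.norm(v)
        parts.append(v / norm if norm > 0 else v)
    return np.concatenate(parts)

rng = np.random.default_rng(0)
fused = fuse({
    "camera": rng.standard_normal(64),
    "radar": rng.standard_normal(16),
    "lidar": rng.standard_normal(32),
})
```

Adding a sensor simply extends the fused vector; downstream recognition then benefits from the extra modality without a separate rule-based arbitration layer.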
Barclays estimates that a single autonomous car can generate as much as 100 GB of data every second. Applied to the entire US fleet, that equates to 5.8 billion terabytes of raw data per hour.
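Taking the 100 GB-per-second figure at face value, the per-vehicle volume works out as follows (a back-of-the-envelope check, using decimal units, not a quoted statistic):

```python
# Per-car data volume implied by the 100 GB/s estimate (1 TB = 1000 GB).
gb_per_second = 100
tb_per_hour = gb_per_second * 3600 / 1000   # 360 TB per car per hour
tb_per_day = tb_per_hour * 24               # 8640 TB per car per day
```

At 360 TB per car per hour, storing and searching raw footage directly is clearly infeasible, which is the motivation for compact signature-based representations.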
Gaining visibility into this massive data set to discover true insights is the only way to teach an AI to drive. Defining and validating driving policy requires big data, most notably for long-tail, blind-spot behaviors.
The unsupervised AI is able to comb through tremendous amounts of existing automotive data to detect patterns and cluster data, enabling search functionality and insight analysis. Operators are able to search by text, image, video, or signature.
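Search-by-signature reduces, at its core, to nearest-neighbor retrieval. The naive linear scan below illustrates the idea; a real ultra-scale system would use indexed retrieval, and the clip names and bit width here are invented.

```python
import numpy as np

def nearest(query_sig, database):
    """Return the key of the stored binary signature closest to the
    query in Hamming distance -- a naive linear scan standing in for
    large-scale retrieval. Illustrative only."""
    return min(database,
               key=lambda k: int(np.sum(database[k] != query_sig)))

rng = np.random.default_rng(0)
db = {f"clip_{i}": rng.integers(0, 2, size=64, dtype=np.uint8)
      for i in range(100)}
query = db["clip_42"].copy()
query[:3] ^= 1  # a near-duplicate query differing in 3 bits
match = nearest(query, db)
```

Because signatures are tiny and comparison is a bitwise operation, even this brute-force version scans thousands of stored clips per millisecond; indexed structures extend the same idea to ultra scale.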
The Cortica Big Data platform is engineered from the ground up, using proprietary signature and Cortex™ technology at its base to provide ultra-scale visual data storage and retrieval.
Big data and machine learning functionality allows for:
Search by image/frame
Data clustering and organization
Text-to-image/video search
Stored signatures yield sublinear database growth
Cortica’s technology recognizes concepts in all conditions, regardless of lighting, weather, obstructions, lack of lane markings, or any other situation that may arise.
POWERFUL AND LIGHTWEIGHT
The generic technology brings simpler, lighter, commodity computational resources to autonomous-driving components. The system operates with extremely low power consumption.
The platform is compatible with existing hardware and does not require additional retrofitted components.
PORTABLE AND UNIVERSAL
The lightweight signature files preserve raw scene information for constant updates and learning. These signatures are shared among vehicles and with the concept database for continuous updates.
A manual image-annotation process, as employed by other solutions, cannot scale to the big data generated by self-driving vehicles. Cortica’s lightweight, unsupervised approach of bottom-up learning from large-scale databases is not limited by the ever-growing long tail of edge cases.
Cortica’s architecture utilizes an unannotated array of images and video to learn key concepts from the data and develop contextual, situational understanding. This generic background process covers all concept types with no manual training.
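Learning concepts from unannotated data can be sketched with the most basic unsupervised tool, k-means clustering: feature vectors group themselves into clusters ("concepts") with no labels involved. The data and initialization below are invented for the example; Cortica's actual learning process is proprietary and far more sophisticated.

```python
import numpy as np

def kmeans(points, k=3, iters=20):
    """Minimal k-means over unlabeled feature vectors, illustrating
    bottom-up concept discovery with no manual annotation.
    Centers are initialized from evenly spaced samples for
    determinism in this sketch."""
    centers = points[np.linspace(0, len(points) - 1, k).astype(int)].copy()
    for _ in range(iters):
        # Assign each point to its nearest center, then recompute centers.
        labels = np.argmin(
            np.linalg.norm(points[:, None] - centers[None], axis=2), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return labels

# Three well-separated blobs of unlabeled "image features".
rng = np.random.default_rng(1)
pts = np.concatenate([rng.normal(c, 0.1, (30, 2)) for c in (0.0, 5.0, 10.0)])
labels = kmeans(pts, k=3)
```

No label ever enters the process, yet each blob ends up in its own cluster, which is the essence of the "no manual training" claim, scaled here down to a toy.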