The eye is the human body's primary sensory organ, providing the main channel for perceiving the surrounding world. Daily tasks are difficult for people with visual impairments, who must rely on touch, hearing, smell, and taste to navigate their environment. Modern technology offers smart tools and applications to assist them, and this project draws its inspiration from the way the human eye and brain work together to analyze images and identify objects.

The system uses a Raspberry Pi fitted with a camera module, accessed through the Picamera2 library, to capture images in real time during execution. Object recognition is based on the SSD MobileNet V3 algorithm, which applies deep learning through convolutional neural networks (CNNs). The model is trained on the COCO dataset, which covers a wide range of everyday object categories, and delivers strong recognition performance across them. The OpenCV library handles image processing and runs the detection model, while the NumPy library supports the numerical computations involved, including distance estimation based on the focal-length proportionality relation. Audio feedback is generated with the pyttsx3 library, which converts the names and distances of recognized objects into spoken words. These deep-learning techniques allow the model to learn visual features automatically from training data, yielding high recognition accuracy. The Raspberry Pi acts as the central processing unit, managing every task from image capture to analysis and response generation.

The system is flexible and scalable, allowing future extensions such as text recognition and color identification. By providing real-time object recognition with spoken feedback, this assistive technology enhances the independence of visually impaired people and supports their active participation in a variety of environments.
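The distance estimation mentioned above follows the standard pinhole-camera proportionality: for an object of known real-world width W that appears p pixels wide in the image, the distance is d = (W · f) / p, where the focal length f (in pixels) is obtained once from a calibration image taken at a known distance. The following is a minimal sketch of that relation; the function names and calibration values are illustrative assumptions, not taken from the project code:

```python
def focal_length_from_calibration(known_distance_cm, known_width_cm, pixel_width):
    """Calibrate the focal length (in pixels) from one reference image:
    f = (p * d) / W, using an object of known width W photographed
    at a known distance d, where it appears p pixels wide."""
    return (pixel_width * known_distance_cm) / known_width_cm


def estimate_distance(focal_length_px, known_width_cm, pixel_width):
    """Estimate the distance to an object of known width:
    d = (W * f) / p, where p is its measured pixel width in the frame."""
    return (known_width_cm * focal_length_px) / pixel_width


if __name__ == "__main__":
    # Hypothetical calibration: a 15 cm wide object, photographed at 50 cm,
    # appears 300 px wide in the reference image -> f = 1000 px.
    f = focal_length_from_calibration(50.0, 15.0, 300)
    # Later the same object appears only 150 px wide, i.e. twice as far away.
    print(round(estimate_distance(f, 15.0, 150), 1))  # -> 100.0
```

In the running system, the pixel width would come from the bounding box returned by the detector, and the result would be passed to the text-to-speech stage along with the object's label.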