A paper released through Apple Machine Learning Research describes SceneScout, a multimodal LLM-driven AI agent that views Street View imagery, analyzes what it sees, and describes it to the user.

At the moment, pre-travel tools offer details like landmarks and turn-by-turn navigation, which give visually impaired users little in the way of landscape context. Street View-style imagery, such as Apple Maps Look Around, presents sighted users with far richer contextual cues, cues that people who cannot see the imagery miss out on entirely. This is where SceneScout steps in, as an AI agent providing accessible interactions with Street View imagery.

SceneScout has two modes. Route Preview describes elements the agent observes along a route; for example, it could point out trees at a turn and other tactile landmarks. The second mode, Virtual Exploration, enables free movement within Street View imagery, with the agent describing elements to the user as they virtually move around.

In its user study, the team found that SceneScout helps visually impaired people uncover information they could not otherwise access with existing methods. If the research pans out, it could become a tool that lets visually impaired people virtually explore a location in advance.
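The paper does not publish its implementation, but the core interaction it describes is straightforward to picture: hand a street-level frame to a vision-capable LLM along with an accessibility-focused prompt, and read back the description. The sketch below illustrates that loop under stated assumptions; the OpenAI client, the gpt-4o model name, the prompt wording, and the describe_scene helper are all illustrative stand-ins, not SceneScout's actual pipeline, which works from Apple Maps Look Around imagery.

```python
# A minimal sketch of the SceneScout-style interaction: one street-level image
# in, one accessibility-oriented description out. The client, model name, and
# prompt here are assumptions for illustration, not the paper's implementation.
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = (
    "You are assisting a blind pedestrian previewing a walking route. "
    "Describe this street-level image, focusing on tactile and audible "
    "landmarks: curb cuts, trees, poles, surface changes, and turns."
)

def describe_scene(image_path: str) -> str:
    """Return a description of a single street-view frame."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("ascii")
    response = client.chat.completions.create(
        model="gpt-4o",  # any vision-capable model would do
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": PROMPT},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # Hypothetical frame; in SceneScout this would come from Look Around.
    print(describe_scene("look_around_frame.jpg"))
```

A Route Preview-style tool would run a call like this over each frame along a route, while Virtual Exploration would re-invoke it as the user moves through the imagery.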