
Arizona and Intel team up on Institute for Automated Mobility to develop driverless vehicle technology

Arizona Governor Doug Ducey today announced that he has signed an executive order creating the Institute for Automated Mobility (IAM), a brain trust of enterprises, government agencies, and universities that will collaborate on autonomous vehicle testing in Arizona. IAM will comprise physical centers designed for complex research and scenarios, the governor's office said during a press briefing this afternoon. A Traffic Incident Management division designed and run by the Arizona Department of Transportation and the Arizona Department of Public Safety will investigate autonomous technologies for law enforcement and first responders. And at build-out, IAM will boast a simulation lab and an enclosed 2.1-mile test track with multiple route configurations, intersections, signage, and traffic signals.

Among the consortium's founding members are Intel, the Arizona Department of Public Safety, and all three of Arizona's public universities — Arizona State University, the University of Arizona, and Northern Arizona University. Its work will be overseen by the Arizona Commerce Authority and Dr. Sethuraman Panchanathan, chief research and innovation officer at Arizona State University and the governor's newly appointed Advisor for Science and Technology. The state has invested $1.5 million in IAM so far, and Intel is throwing in an undisclosed amount.

"IAM will provide a concierge-style service designed to help partners easily and effectively execute their R&D projects," Dr. Panchanathan said. "The truly comprehensive operating model combined with a commitment to sharing data and best practices will ensure projects can achieve intellectual, economic development, and societal outcomes."

Arizona has played host to self-driving research efforts for the better part of three years, the governor's office noted.
The year 2015 marked the creation of the Self-Driving Oversight Committee, a team of transportation, public safety, and policy experts working to further in-state autonomous car development. In 2016, a bevy of companies including Intel, Google spinoff Waymo, Ford, Uber, and the Cruise Automation arm of GM established bases of operation, and a year later, in 2017, Intel collaborated with Arizona State University on a study of autonomous vehicle safety.

The accelerated pace of deployment reflects broader national momentum toward driverless car regulation. California in April expanded its testing rules to allow for remote monitoring instead of a safety driver inside the vehicle. In August, the city of Arlington, Texas signed a one-year contract with an autonomous car startup. And this week, Pennsylvania's Department of Transportation gave the green light to self-driving car startup Aurora, which will be the first company officially authorized by the state to test its vehicles on public roads.

To date, more than 20 states and the District of Columbia have passed laws regarding self-driving cars, and an additional 10 governors have issued executive orders. There's been less movement at the federal level — a bill in the Senate, AV Start, has been twice defeated — but this year saw signs of progress. In March, President Donald Trump signed into law a $1.3 trillion spending bill that earmarks $100 million for projects that test the feasibility and safety of autonomous cars. And in early October, the Department of Transportation, through the National Highway Traffic Safety Administration, issued the third iteration of its voluntary guidelines on the development and safe deployment of driverless car technology: Automated Vehicles 3.0. In it, regulators posit new safety standards to accommodate automated vehicle technologies, as well as the possibility of setting exceptions to "certain standards … that are relevant only when human drivers are present."
With IAM, Arizona is seeking to cement its foothold in a lucrative industry — and it's not the only one. In April, Michigan partnered with Microsoft to open the American Center for Mobility, a nonprofit center for autonomous vehicle research. The driverless car market is expected to be worth $54.33 billion in 2019 — a year before 10 million vehicles are expected to hit the road — and $556.67 billion by 2026, according to Allied Market Research, propelled by verticals such as food delivery and ride-hailing. About $80 billion has been invested in autonomous car research to date.

But the industry has had its challenges. In March of this year, Uber suspended testing of its autonomous Volvo XC90 fleet after one of its cars struck and killed a pedestrian in Tempe, Arizona. An investigation by the National Transportation Safety Board later determined that Uber had deliberately disabled the car's emergency braking system. Separately, Tesla's Autopilot was found to have been engaged in the moments leading up to a fatal Model X collision this spring — the second fatality involving Autopilot since a crash in May 2016. (Tesla said that in the moments leading up to the most recent accident, the driver had received several visual cues and one audible cue to take back control of the car.)

Critics contend that the autonomous car industry lacks an empirical, agreed-upon method of gauging in-vehicle safety. On Wednesday, the RAND Corporation published an Uber-commissioned report — "Measuring Automated Vehicle Safety: Forging a Framework" — that laid bare the challenges ahead. It suggests that local DMVs play a larger role in formalizing the demonstration process and proposes that companies and governments engage in data-sharing.

Public confidence, unsurprisingly, is low. Three separate studies this summer — by the Brookings Institution, think tank HNTB, and the Advocates for Highway and Auto Safety (AHAS) — found that a majority of people aren't convinced of driverless cars' safety.
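Those Allied Market Research projections imply very rapid growth. As a quick back-of-the-envelope check (illustrative only, using the figures cited above), the jump from $54.33 billion in 2019 to $556.67 billion in 2026 works out to a compound annual growth rate of roughly 39 percent:

```python
def implied_cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate implied by a start and end market size."""
    return (end_value / start_value) ** (1 / years) - 1

# Allied Market Research projections cited above: $54.33B (2019) -> $556.67B (2026)
growth = implied_cagr(54.33, 556.67, years=7)
print(f"Implied CAGR: {growth:.1%}")  # roughly 39% per year
```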
More than 60 percent of respondents said they were not inclined to ride in self-driving cars, almost 70 percent expressed concerns about sharing the road with them, and 59 percent expected that self-driving cars will be no safer than human-controlled cars. That's despite the fact that about 94 percent of car crashes are caused by human error and that in 2016 the top three causes of traffic fatalities were distracted driving, drunk driving, and speeding. According to the National Safety Council, Americans' odds of dying in a car crash are one in 114. In 2016, motor vehicle crashes claimed 40,000 lives.

Tel Aviv, Israel-based Mobileye, which Intel acquired in a $15.3 billion deal last April, proposed a solution — Responsibility-Sensitive Safety (RSS) — last October at the World Knowledge Forum in Seoul, South Korea. An accompanying whitepaper describes it as a "deterministic … formula" with "logically provable" rules of the road intended to prevent self-driving vehicles from causing accidents. Intel characterizes it as a common-sense approach to on-the-road decision-making that codifies good habits, like maintaining a safe following distance and giving other cars the right of way. The ability to assign fault is the key.

"Just like the best human drivers in the world, self-driving cars cannot avoid accidents due to actions beyond their control," Amnon Shashua, Mobileye CEO and Intel senior vice president, said in a statement last year. "But the most responsible, aware, and cautious driver is very unlikely to cause an accident of his or her own fault, particularly if they had 360-degree vision and lightning-fast reaction times like autonomous vehicles will."

During today's press call, Dr. Panchanathan expressed optimism that IAM's future work, building on foundational approaches like RSS, will help to move the needle forward. "We have the spirit of collaboration which is seamless between corporate industry, government," he said. "All disciplines have to come together … to truly build a sustainable and innovative solution."
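RSS makes the "safe following distance" habit mentioned above mathematically precise with a closed-form minimum-gap formula. The Python sketch below implements that longitudinal safe-distance rule; the default parameter values (response time, acceleration and braking bounds) are illustrative assumptions for this example, not figures taken from the whitepaper:

```python
def rss_min_safe_distance(
    v_rear: float,             # rear (following) vehicle speed, m/s
    v_front: float,            # front (lead) vehicle speed, m/s
    rho: float = 0.5,          # rear vehicle's response time, s (assumed value)
    a_accel_max: float = 3.0,  # worst-case acceleration during response time, m/s^2 (assumed)
    a_brake_min: float = 4.0,  # minimum braking the rear vehicle commits to, m/s^2 (assumed)
    a_brake_max: float = 8.0,  # maximum braking the front vehicle might apply, m/s^2 (assumed)
) -> float:
    """Minimum longitudinal gap (meters) so the rear vehicle can always
    stop without hitting the front vehicle, even in the worst case: the
    rear car accelerates for `rho` seconds before braking at only its
    minimum rate, while the front car brakes as hard as possible."""
    v_worst = v_rear + rho * a_accel_max  # rear speed after the response lag
    d = (
        v_rear * rho                        # distance covered during response time
        + 0.5 * a_accel_max * rho ** 2      # extra distance from worst-case acceleration
        + v_worst ** 2 / (2 * a_brake_min)  # rear vehicle's stopping distance
        - v_front ** 2 / (2 * a_brake_max)  # minus the front vehicle's stopping distance
    )
    return max(d, 0.0)  # a non-positive result means any gap is safe

# Example: both cars traveling at highway speed (30 m/s)
print(round(rss_min_safe_distance(30.0, 30.0), 1))  # → 83.2
```

The "assign fault" framing follows directly: if the rear vehicle always maintains at least this gap, a rear-end collision can only result from behavior outside its control.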

Self-driving cars need a common language to talk about safety, or they will fail

There's been a lot of talk lately about the need for a common language when it comes to self-driving cars. Ford recently came out in favor of standardized visual cues that autonomous vehicles could use to communicate intent to pedestrians, bicyclists, and other drivers. Meanwhile, critics continue to assail the five levels of automation defined by the Society of Automotive Engineers, the global standard for self-driving, as overly broad and possibly dangerous. Most experts agree: we need a better, more unified way to talk about self-driving cars.

Today, the RAND Corporation unveiled its own well-researched attempt to introduce a common language for autonomous vehicles. Titled "Measuring Automated Vehicle Safety: Forging a Framework," the 91-page document seeks to answer a burning question: can fierce rivals find common ways to measure safety that would be helpful to the public?

After all, that is the core obstacle to any effort to standardize anything in the self-driving space. Companies like Waymo, Tesla, GM, Ford, and Uber would sooner sue the competition into oblivion than gather round the campfire and sing "Kumbaya." These companies have invested billions of dollars in research and development ($80 billion, according to the Brookings Institution) in the hopes of reaping the rewards of a potential $7 trillion industry. Why should they agree to anything that could level the playing field for their competitors and eliminate their own advantages?

For Marjory Blumenthal, senior policy analyst at RAND and lead author of the report, the answer is pretty simple: there won't be any self-driving cars if people don't feel safe enough to ride in them. "There's not the greatest degree of transparency," Blumenthal told The Verge. "So it seems like it's a good time to provide a way so that companies could be encouraged to find some commonality in the way they talk about how and why their vehicles are safe."
The number of autonomous vehicles available to the public today is infinitesimal — there are only a handful of public trials going on in the US, Europe, Russia, and China — but the public is growing increasingly skeptical of this new technology. In March, a self-driving Uber vehicle struck and killed a pedestrian in Tempe, Arizona, while the backup safety driver was streaming a video on her phone, police said. Uber suspended testing in the aftermath, and some safety advocates said the crash showed the system was not yet safe enough to be tested on public roads.

"The success of autonomous vehicles requires public trust," Blumenthal said. "And right now, autonomous vehicle development is happening along different paths, and so having a common reference point can help the development community move toward safer vehicles and promote that public trust."

Ironically, RAND took on the task of creating a shared language for self-driving cars at the request of Uber's Advanced Technologies Group, which operates the ride-hailing giant's AV fleet. The company approached RAND in the summer of 2017, almost a year before the fatal Tempe crash, with the request to develop a company-neutral framework for AV safety. Blumenthal and her team set out to talk to a wide array of stakeholders, including engineers at Tesla, Waymo, and Toyota, as well as researchers, public safety advocates, and government officials.

RAND starts out by defining the three stages in the life cycle of self-driving cars: development, demonstration, and deployment. It also considers safety measurements such as crashes, infractions (like running a red light), and a new measure called "roadmanship," which gauges whether the vehicle is a good citizen of the roadway (e.g., plays well with others). A formal definition of roadmanship is needed before AVs are tested in public, RAND recommends.
Other considerations include where the safety measurements were taken — in simulation, on a closed course or proving ground, or out in the wild, on public roads with or without a safety driver. The operational design domain of self-driving cars can also take into account a variety of external conditions, such as geography, weather, lighting, road markings, and other factors.

Throughout its report, RAND gently chides AV companies for the way they talk about self-driving cars in utopian terms. Unrealistic claims of near perfection can warp the public's perception of what AVs can and cannot accomplish, and claims that mass adoption of AVs can lower the number of annual motor vehicle deaths can be undone by even a single crash. We saw this with the Uber crash in March, after which public support for AVs dropped precipitously.

The federal government is taking a backseat on self-driving cars, rewriting its own rules to incentivize their deployment and basically passing the buck to the states in terms of regulation and enforcement. As such, RAND suggests that local DMVs may want to play a larger role in formalizing the demonstration process, much like California does by requiring licenses to test AVs on public roads. RAND also recommends more data-sharing between companies and with government agencies — a suggestion that is sure to be met with silence from the private sector. Companies are reluctant to publicize their data for fear of exposing important trade secrets.

But Blumenthal and her team are optimistic. "There is hope of more collective action among competitors," the report concludes, "what some might call 'coopetition.'"