Tests Show Tesla’s ‘Full Self Driving’ Cars Require Human Intervention to Prevent Risky Behavior

A recent investigation revealed that Tesla’s “Full Self Driving” feature still frequently requires human intervention to prevent potentially risky behavior, Latin Times reported.

The study, conducted by AMCI Testing, involved driving more than 1,000 miles in Southern California, during which drivers had to intervene more than 75 times to avert hazardous situations.

This translates to roughly one intervention every 13 miles traveled.

The evaluation focused on Tesla models equipped with Full Self Driving versions 12.5.1 and 12.5.3 across various terrains, including city streets, rural highways, interstates, and mountain roads.

Instances of risky behavior observed included running red lights and swerving into opposing lanes on winding roads.

“Whether it’s due to insufficient computing power or lagging calculations causing the car to fall behind, pinpointing the exact cause is challenging. These failures are concerning,” stated Guy Mangiamele, director of AMCI Testing, in an interview with Ars Technica.

Mangiamele added, “Moreover, there are recurring issues stemming from basic programming flaws, like initiating lane changes too close to freeway exits, which hampers the system’s performance and raises doubts about the quality of its core programming.”

The research also highlighted commendable achievements, such as maneuvering into tight spaces between parked cars so that another vehicle could pass and navigating blind curves effectively.

Latin Times reached out to Tesla for comment on the study but had received no response at the time of reporting.

2024-10-04 03:15:02
Read more on www.ibtimes.com