Release Notes for Update 2022.20

Containing 2022.20, 2022.20.5, 2022.20.6, 2022.20.5.1, 2022.20.7, 2022.20.8, 2022.20.9, 2022.20.10, 2022.20.11, 2022.20.15, 2022.20.12, 2022.20.16, 2022.20.17, 2022.20.12.1, 2022.20.18, 2022.20.19

Full Self-Driving Beta 10.69.2.4

Version 2022.20.19
Install Statistics

Installed on 3 cars

Pending on 0 cars

No Release Notes

The release notes for this version aren't available yet; check back soon.


Full Self-Driving Beta 10.69.2.3

Version 2022.20.18
Install Statistics

Installed on 1,106 cars in 6 countries

Pending on 19 cars

No Release Notes

The release notes for this version aren't available yet; check back soon.


Full Self-Driving Beta 10.69.2.2

Version 2022.20.17
Install Statistics

Installed on 39 cars in 6 countries

Pending on 2 cars

FSD Beta v10.69.2.1

– Added a new "deep lane guidance" module to the Vector Lanes neural network which fuses features extracted from the video streams with coarse map data, i.e. lane counts and lane connectivities. This architecture achieves a 44% lower error rate on lane topology compared to the previous model, enabling smoother control before lanes and their connectivities become visually apparent. This provides a way to make every Autopilot drive as good as someone driving their own commute, yet in a sufficiently general way that adapts for road changes.

– Improved overall driving smoothness, without sacrificing latency, through better modelling of system and actuation latency in trajectory planning. Trajectory planner now independently accounts for latency from steering commands to actual steering actuation, as well as acceleration and brake commands to actuation. This results in a trajectory that is a more accurate model of how the vehicle would drive. This allows better downstream controller tracking and smoothness while also allowing a more accurate response during harsh manoeuvres. (See the latency-compensation sketch after this list.)

– Improved unprotected left turns with more appropriate speed profile when approaching and exiting median crossover regions, in the presence of high speed cross traffic ("Chuck Cook style" unprotected left turns). This was done by allowing optimisable initial jerk, to mimic the harsh pedal press by a human, when required to go in front of high speed objects. Also improved lateral profile approaching such safety regions to allow for better pose that aligns well for exiting the region. Finally, improved interaction with objects that are entering or waiting inside the median crossover region with better modelling of their future intent.

– Added control for arbitrary low-speed moving volumes from Occupancy Network. This also enables finer control for more precise object shapes that cannot be easily represented by a cuboid primitive. This required predicting velocity at every 3D voxel. We may now control for slow-moving UFOs. (See the occupancy-grid sketch after this list.)

– Upgraded Occupancy Network to use video instead of images from single time step. This temporal context allows the network to be robust to temporary occlusions and enables prediction of occupancy flow. Also, improved ground truth with semantics-driven outlier rejection, hard example mining, and increasing the dataset size by 2.4x.

– Upgraded to a new two-stage architecture to produce object kinematics (e.g. velocity, acceleration, yaw rate) where network compute is allocated O(objects) instead of O(space). This improved velocity estimates for far away crossing vehicles by 20%, while using one tenth of the compute.

– Increased smoothness for protected right turns by improving the association of traffic lights with slip lanes vs yield signs with slip lanes. This reduces false slowdowns when there are no relevant objects present and also improves yielding position when they are present.

– Reduced false slowdowns near crosswalks. This was done with improved understanding of pedestrian and bicyclist intent based on their motion.

– Improved geometry error of ego-relevant lanes by 34% and crossing lanes by 21% with a full Vector Lanes neural network update. Information bottlenecks in the network architecture were eliminated by increasing the size of the per-camera feature extractors, video modules, internals of the autoregressive decoder, and by adding a hard attention mechanism which greatly improved the fine position of lanes.

– Made speed profile more comfortable when creeping for visibility, to allow for smoother stops when protecting for potentially occluded objects.

– Improved recall of animals by 34% by doubling the size of the auto-labeled training set.

– Enabled creeping for visibility at any intersection where objects might cross ego's path, regardless of presence of traffic controls.

– Improved accuracy of stopping position in critical scenarios with crossing objects, by allowing dynamic resolution in trajectory optimization to focus more on areas where finer control is essential.

– Increased recall of forking lanes by 36% by having topological tokens participate in the attention operations of the autoregressive decoder and by increasing the loss applied to fork tokens during training.

– Improved velocity error for pedestrians and bicyclists by 17%, especially when ego is making a turn, by improving the onboard trajectory estimation used as input to the neural network.

– Improved object future path prediction in scenarios with high yaw rate by incorporating yaw rate and lateral motion into the likelihood estimation. This helps with objects turning into or away from ego's lane, especially in intersections or cut-in scenarios.

– Improved speed when entering highway by better handling of upcoming map speed changes, which increases the confidence of merging onto the highway.

– Reduced latency when starting from a stop by accounting for lead vehicle jerk.

– Enabled faster identification of red light runners by evaluating their current kinematic state against their expected braking profile. (See the braking-profile sketch after this list.)
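A minimal sketch of the latency-compensation idea in the smoothness bullet above: before planning, the measured ego state is rolled forward through assumed, independently modelled steering and longitudinal actuation delays, so the optimiser plans from the state the vehicle will actually be in when its commands take effect. The delay values and the simple kinematic rollout are illustrative assumptions, not Tesla's implementation.

import math
from dataclasses import dataclass

# Illustrative delays; the real values are not published.
STEER_ACTUATION_DELAY_S = 0.10   # steering command -> steering actuation
ACCEL_ACTUATION_DELAY_S = 0.30   # accel/brake command -> actuation

@dataclass
class EgoState:
    x: float      # m
    y: float      # m
    yaw: float    # rad
    speed: float  # m/s

def roll_forward(state: EgoState, last_accel_cmd: float, last_curvature_cmd: float) -> EgoState:
    """Propagate the ego state through the two actuation delays with a simple
    kinematic model, treating longitudinal and lateral latency separately."""
    # Longitudinal: speed keeps following the previously commanded acceleration
    # for the longitudinal delay.
    speed = max(state.speed + last_accel_cmd * ACCEL_ACTUATION_DELAY_S, 0.0)
    # Lateral: pose keeps evolving under the previously commanded curvature
    # for the (shorter) steering delay.
    yaw = state.yaw + state.speed * last_curvature_cmd * STEER_ACTUATION_DELAY_S
    x = state.x + state.speed * math.cos(state.yaw) * STEER_ACTUATION_DELAY_S
    y = state.y + state.speed * math.sin(state.yaw) * STEER_ACTUATION_DELAY_S
    return EgoState(x, y, yaw, speed)

# The trajectory optimiser would then plan from roll_forward(measured_state, ...)
# rather than from the raw measured state.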
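The low-speed-moving-volumes bullet above implies a dense occupancy grid in which every voxel also carries a velocity estimate, so the planner can react to slow-moving shapes that no cuboid fits well. A toy illustration of that data layout; the grid resolution, channel layout and thresholds are assumptions for illustration only.

import numpy as np

# Toy grid: 100 x 100 x 20 voxels. Channel 0 is occupancy probability,
# channels 1-3 are the per-voxel velocity estimate in m/s.
grid = np.zeros((100, 100, 20, 4), dtype=np.float32)

def slow_moving_volumes(grid: np.ndarray,
                        occupancy_threshold: float = 0.5,
                        max_speed_mps: float = 2.0) -> np.ndarray:
    """Return indices of occupied voxels moving slower than max_speed_mps.
    These volumes would be handed to the planner as obstacles to control
    against, without requiring a cuboid object detection."""
    occupied = grid[..., 0] > occupancy_threshold
    speed = np.linalg.norm(grid[..., 1:4], axis=-1)
    return np.argwhere(occupied & (speed < max_speed_mps))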
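For the red-light-runner bullet, one way to read "evaluating their current kinematic state against their expected braking profile" is: from the crossing vehicle's speed and distance to its stop line, compute the deceleration it would need in order to stop, and flag it early if that exceeds a plausible braking limit. A minimal sketch; the braking limit and the flat-deceleration model are assumptions.

def likely_red_light_runner(speed_mps: float,
                            dist_to_stop_line_m: float,
                            max_plausible_decel_mps2: float = 4.0) -> bool:
    """Flag a crossing vehicle as a likely red-light runner if stopping before
    its stop line would require braking harder than max_plausible_decel_mps2."""
    if dist_to_stop_line_m <= 0.0:
        # Already past the stop line and still moving.
        return speed_mps > 0.5
    required_decel = speed_mps ** 2 / (2.0 * dist_to_stop_line_m)
    return required_decel > max_plausible_decel_mps2

# Example: 15 m/s (~54 km/h) with 10 m left to the stop line needs about
# 11.3 m/s^2 of braking, far beyond a normal stop, so the vehicle is flagged.
assert likely_red_light_runner(15.0, 10.0)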


Availability:

Available in United States


Models:

S

3

X

Y

Full Self-Driving Beta 10.69.2.1

Version 2022.20.16
Install Statistics

Installed on 0 cars in 1 country

Pending on 0 cars

FSD Beta v10.69.2.1

– Added a new "deep lane guidance" module to the Vector Lanes neural network which fuses features extracted from the video streams with coarse map data, i.e. lane counts and lane connectivities. This architecture achieves a 44% lower error rate on lane topology compared to the previous model, enabling smoother control before lanes and their connectivities become visually apparent. This provides a way to make every Autopilot drive as good as someone driving their own commute, yet in a sufficiently general way that adapts for road changes.

– Improved overall driving smoothness, without sacrificing latency, through better modelling of system and actuation latency in trajectory planning. Trajectory planner now independently accounts for latency from steering commands to actual steering actuation, as well as acceleration and brake commands to actuation. This results in a trajectory that is a more accurate model of how the vehicle would drive. This allows better downstream controller tracking and smoothness while also allowing a more accurate response during harsh manoeuvres.

– Improved unprotected left turns with more appropriate speed profile when approaching and exiting median crossover regions, in the presence of high speed cross traffic ("Chuck Cook style" unprotected left turns). This was done by allowing optimisable initial jerk, to mimic the harsh pedal press by a human, when required to go in front of high speed objects. Also improved lateral profile approaching such safety regions to allow for better pose that aligns well for exiting the region. Finally, improved interaction with objects that are entering or waiting inside the median crossover region with better modelling of their future intent.

– Added control for arbitrary low-speed moving volumes from Occupancy Network. This also enables finer control for more precise object shapes that cannot be easily represented by a cuboid primitive. This required predicting velocity at every 3D voxel. We may now control for slow-moving UFOs.

– Upgraded Occupancy Network to use video instead of images from single time step. This temporal context allows the network to be robust to temporary occlusions and enables prediction of occupancy flow. Also, improved ground truth with semantics-driven outlier rejection, hard example mining, and increasing the dataset size by 2.4x.

– Upgraded to a new two-stage architecture to produce object kinematics (e.g. velocity, acceleration, yaw rate) where network compute is allocated O(objects) instead of O(space). This improved velocity estimates for far away crossing vehicles by 20%, while using one tenth of the compute.

– Increased smoothness for protected right turns by improving the association of traffic lights with slip lanes vs yield signs with slip lanes. This reduces false slowdowns when there are no relevant objects present and also improves yielding position when they are present.

– Reduced false slowdowns near crosswalks. This was done with improved understanding of pedestrian and bicyclist intent based on their motion.

– Improved geometry error of ego-relevant lanes by 34% and crossing lanes by 21% with a full Vector Lanes neural network update. Information bottlenecks in the network architecture were eliminated by increasing the size of the per-camera feature extractors, video modules, internals of the autoregressive decoder, and by adding a hard attention mechanism which greatly improved the fine position of lanes.

– Made speed profile more comfortable when creeping for visibility, to allow for smoother stops when protecting for potentially occluded objects.

– Improved recall of animals by 34% by doubling the size of the auto-labeled training set.

– Enabled creeping for visibility at any intersection where objects might cross ego's path, regardless of presence of traffic controls.

– Improved accuracy of stopping position in critical scenarios with crossing objects, by allowing dynamic resolution in trajectory optimization to focus more on areas where finer control is essential.

– Increased recall of forking lanes by 36% by having topological tokens participate in the attention operations of the autoregressive decoder and by increasing the loss applied to fork tokens during training.

– Improved velocity error for pedestrians and bicyclists by 17%, especially when ego is making a turn, by improving the onboard trajectory estimation used as input to the neural network.

– Improved object future path prediction in scenarios with high yaw rate by incorporating yaw rate and lateral motion into the likelihood estimation. This helps with objects turning into or away from ego's lane, especially in intersections or cut-in scenarios.

– Improved speed when entering highway by better handling of upcoming map speed changes, which increases the confidence of merging onto the highway.

– Reduced latency when starting from a stop by accounting for lead vehicle jerk.

– Enabled faster identification of red light runners by evaluating their current kinematic state against their expected braking profile.


Availability:

Available in United States


Models:

S

3

X

Y

Full Self-Driving Beta 10.69.2

Version 2022.20.15
Install Statistics

Installed on 14 cars in 5 countries

Pending on 1 car

FSD Beta v10.69.2

– Added a new "deep lane guidance" module to the Vector Lanes neural network which fuses features extracted from the video streams with coarse map data, i.e. lane counts and lane connectivities. This architecture achieves a 44% lower error rate on lane topology compared to the previous model, enabling smoother control before lanes and their connectivities become visually apparent. This provides a way to make every Autopilot drive as good as someone driving their own commute, yet in a sufficiently general way that adapts for road changes.

– Improved overall driving smoothness, without sacrificing latency, through better modelling of system and actuation latency in trajectory planning. Trajectory planner now independently accounts for latency from steering commands to actual steering actuation, as well as acceleration and brake commands to actuation. This results in a trajectory that is a more accurate model of how the vehicle would drive. This allows better downstream controller tracking and smoothness while also allowing a more accurate response during harsh manoeuvres.

– Improved unprotected left turns with more appropriate speed profile when approaching and exiting median crossover regions, in the presence of high speed cross traffic ("Chuck Cook style" unprotected left turns). This was done by allowing optimisable initial jerk, to mimic the harsh pedal press by a human, when required to go in front of high speed objects. Also improved lateral profile approaching such safety regions to allow for better pose that aligns well for exiting the region. Finally, improved interaction with objects that are entering or waiting inside the median crossover region with better modelling of their future intent.

– Added control for arbitrary low-speed moving volumes from Occupancy Network. This also enables finer control for more precise object shapes that cannot be easily represented by a cuboid primitive. This required predicting velocity at every 3D voxel. We may now control for slow-moving UFOs.

– Upgraded Occupancy Network to use video instead of images from single time step. This temporal context allows the network to be robust to temporary occlusions and enables prediction of occupancy flow. Also, improved ground truth with semantics-driven outlier rejection, hard example mining, and increasing the dataset size by 2.4x.

– Upgraded to a new two-stage architecture to produce object kinematics (e.g. velocity, acceleration, yaw rate) where network compute is allocated O(objects) instead of O(space). This improved velocity estimates for far away crossing vehicles by 20%, while using one tenth of the compute.

– Increased smoothness for protected right turns by improving the association of traffic lights with slip lanes vs yield signs with slip lanes. This reduces false slowdowns when there are no relevant objects present and also improves yielding position when they are present.

– Reduced false slowdowns near crosswalks. This was done with improved understanding of pedestrian and bicyclist intent based on their motion.

– Improved geometry error of ego-relevant lanes by 34% and crossing lanes by 21% with a full Vector Lanes neural network update. Information bottlenecks in the network architecture were eliminated by increasing the size of the per-camera feature extractors, video modules, internals of the autoregressive decoder, and by adding a hard attention mechanism which greatly improved the fine position of lanes.

– Made speed profile more comfortable when creeping for visibility, to allow for smoother stops when protecting for potentially occluded objects.

– Improved recall of animals by 34% by doubling the size of the auto-labeled training set.

– Enabled creeping for visibility at any intersection where objects might cross ego's path, regardless of presence of traffic controls.

– Improved accuracy of stopping position in critical scenarios with crossing objects, by allowing dynamic resolution in trajectory optimization to focus more on areas where finer control is essential.

– Increased recall of forking lanes by 36% by having topological tokens participate in the attention operations of the autoregressive decoder and by increasing the loss applied to fork tokens during training.

– Improved velocity error for pedestrians and bicyclists by 17%, especially when ego is making a turn, by improving the onboard trajectory estimation used as input to the neural network.

– Improved object future path prediction in scenarios with high yaw rate by incorporating yaw rate and lateral motion into the likelihood estimation. This helps with objects turning into or away from ego's lane, especially in intersections or cut-in scenarios.

– Improved speed when entering highway by better handling of upcoming map speed changes, which increases the confidence of merging onto the highway.

– Reduced latency when starting from a stop by accounting for lead vehicle jerk.

– Enabled faster identification of red light runners by evaluating their current kinematic state against their expected braking profile.


Availability:

Available in United States


Models:

S

3

X

Y

Update 2022.20.12.1

Install Statistics

Installed on 1 car

Pending on 0 cars

No Release Notes

The release notes for this version aren't available yet; check back soon.


Update 2022.20.12

Install Statistics

Installed on 1 car

Pending on 0 cars

No Release Notes

The release notes for this version aren't available yet; check back soon.


Full Self-Driving Beta 10.69.1.1

Version 2022.20.11
Install Statistics

Installed on 1 car in 2 countries

Pending on 0 cars

FSD Beta v10.69.1.1

– Added a new "deep lane guidance" module to the Vector Lanes neural network which fuses features extracted from the video streams with coarse map data, i.e. lane counts and lane connectivities. This architecture achieves a 44% lower error rate on lane topology compared to the previous model, enabling smoother control before lanes and their connectivities become visually apparent. This provides a way to make every Autopilot drive as good as someone driving their own commute, yet in a sufficiently general way that adapts for road changes.

– Improved overall driving smoothness, without sacrificing latency, through better modelling of system and actuation latency in trajectory planning. Trajectory planner now independently accounts for latency from steering commands to actual steering actuation, as well as acceleration and brake commands to actuation. This results in a trajectory that is a more accurate model of how the vehicle would drive. This allows better downstream controller tracking and smoothness while also allowing a more accurate response during harsh manoeuvres.

– Improved unprotected left turns with more appropriate speed profile when approaching and exiting median crossover regions, in the presence of high speed cross traffic ("Chuck Cook style" unprotected left turns). This was done by allowing optimisable initial jerk, to mimic the harsh pedal press by a human, when required to go in front of high speed objects. Also improved lateral profile approaching such safety regions to allow for better pose that aligns well for exiting the region. Finally, improved interaction with objects that are entering or waiting inside the median crossover region with better modelling of their future intent.

– Added control for arbitrary low-speed moving volumes from Occupancy Network. This also enables finer control for more precise object shapes that cannot be easily represented by a cuboid primitive. This required predicting velocity at every 3D voxel. We may now control for slow-moving UFOs.

– Upgraded Occupancy Network to use video instead of images from single time step. This temporal context allows the network to be robust to temporary occlusions and enables prediction of occupancy flow. Also, improved ground truth with semantics-driven outlier rejection, hard example mining, and increasing the dataset size by 2.4x.

– Upgraded to a new two-stage architecture to produce object kinematics (e.g. velocity, acceleration, yaw rate) where network compute is allocated O(objects) instead of O(space). This improved velocity estimates for far away crossing vehicles by 20%, while using one tenth of the compute.

– Increased smoothness for protected right turns by improving the association of traffic lights with slip lanes vs yield signs with slip lanes. This reduces false slowdowns when there are no relevant objects present and also improves yielding position when they are present.

– Reduced false slowdowns near crosswalks. This was done with improved understanding of pedestrian and bicyclist intent based on their motion.

– Improved geometry error of ego-relevant lanes by 34% and crossing lanes by 21% with a full Vector Lanes neural network update. Information bottlenecks in the network architecture were eliminated by increasing the size of the per-camera feature extractors, video modules, internals of the autoregressive decoder, and by adding a hard attention mechanism which greatly improved the fine position of lanes.

– Made speed profile more comfortable when creeping for visibility, to allow for smoother stops when protecting for potentially occluded objects.

– Improved recall of animals by 34% by doubling the size of the auto-labeled training set.

– Enabled creeping for visibility at any intersection where objects might cross ego's path, regardless of presence of traffic controls.

– Improved accuracy of stopping position in critical scenarios with crossing objects, by allowing dynamic resolution in trajectory optimization to focus more on areas where finer control is essential.

– Increased recall of forking lanes by 36% by having topological tokens participate in the attention operations of the autoregressive decoder and by increasing the loss applied to fork tokens during training.

– Improved velocity error for pedestrians and bicyclists by 17%, especially when ego is making a turn, by improving the onboard trajectory estimation used as input to the neural network.

– Improved object future path prediction in scenarios with high yaw rate by incorporating yaw rate and lateral motion into the likelihood estimation. This helps with objects turning into or away from ego's lane, especially in intersections or cut-in scenarios.

– Improved speed when entering highway by better handling of upcoming map speed changes, which increases the confidence of merging onto the highway.

– Reduced latency when starting from a stop by accounting for lead vehicle jerk.

– Enabled faster identification of red light runners by evaluating their current kinematic state against their expected braking profile.


Availability:

Available in United States


Models:

S

3

X

Y

Full Self-Driving Beta 10.69.1

Version 2022.20.10
Install Statistics

Installed on 0 cars in 1 country

Pending on 0 cars

FSD Beta v10.69.1

– Added a new "deep lane guidance" module to the Vector Lanes neural network which fuses features extracted from the video streams with coarse map data, i.e. lane counts and lane connectivities. This architecture achieves a 44% lower error rate on lane topology compared to the previous model, enabling smoother control before lanes and their connectivities become visually apparent. This provides a way to make every Autopilot drive as good as someone driving their own commute, yet in a sufficiently general way that adapts for road changes.

– Improved overall driving smoothness, without sacrificing latency, through better modelling of system and actuation latency in trajectory planning. Trajectory planner now independently accounts for latency from steering commands to actual steering actuation, as well as acceleration and brake commands to actuation. This results in a trajectory that is a more accurate model of how the vehicle would drive. This allows better downstream controller tracking and smoothness while also allowing a more accurate response during harsh manoeuvres.

– Improved unprotected left turns with more appropriate speed profile when approaching and exiting median crossover regions, in the presence of high speed cross traffic ("Chuck Cook style" unprotected left turns). This was done by allowing optimisable initial jerk, to mimic the harsh pedal press by a human, when required to go in front of high speed objects. Also improved lateral profile approaching such safety regions to allow for better pose that aligns well for exiting the region. Finally, improved interaction with objects that are entering or waiting inside the median crossover region with better modelling of their future intent.

– Added control for arbitrary low-speed moving volumes from Occupancy Network. This also enables finer control for more precise object shapes that cannot be easily represented by a cuboid primitive. This required predicting velocity at every 3D voxel. We may now control for slow-moving UFOs.

– Upgraded Occupancy Network to use video instead of images from single time step. This temporal context allows the network to be robust to temporary occlusions and enables prediction of occupancy flow. Also, improved ground truth with semantics-driven outlier rejection, hard example mining, and increasing the dataset size by 2.4x.

– Upgraded to a new two-stage architecture to produce object kinematics (e.g. velocity, acceleration, yaw rate) where network compute is allocated O(objects) instead of O(space). This improved velocity estimates for far away crossing vehicles by 20%, while using one tenth of the compute.

– Increased smoothness for protected right turns by improving the association of traffic lights with slip lanes vs yield signs with slip lanes. This reduces false slowdowns when there are no relevant objects present and also improves yielding position when they are present.

– Reduced false slowdowns near crosswalks. This was done with improved understanding of pedestrian and bicyclist intent based on their motion.

– Improved geometry error of ego-relevant lanes by 34% and crossing lanes by 21% with a full Vector Lanes neural network update. Information bottlenecks in the network architecture were eliminated by increasing the size of the per-camera feature extractors, video modules, internals of the autoregressive decoder, and by adding a hard attention mechanism which greatly improved the fine position of lanes.

– Made speed profile more comfortable when creeping for visibility, to allow for smoother stops when protecting for potentially occluded objects.

– Improved recall of animals by 34% by doubling the size of the auto-labeled training set.

– Enabled creeping for visibility at any intersection where objects might cross ego's path, regardless of presence of traffic controls.

– Improved accuracy of stopping position in critical scenarios with crossing objects, by allowing dynamic resolution in trajectory optimization to focus more on areas where finer control is essential.

– Increased recall of forking lanes by 36% by having topological tokens participate in the attention operations of the autoregressive decoder and by increasing the loss applied to fork tokens during training.

– Improved velocity error for pedestrians and bicyclists by 17%, especially when ego is making a turn, by improving the onboard trajectory estimation used as input to the neural network.

– Improved object future path prediction in scenarios with high yaw rate by incorporating yaw rate and lateral motion into the likelihood estimation. This helps with objects turning into or away from ego's lane, especially in intersections or cut-in scenarios.

– Improved speed when entering highway by better handling of upcoming map speed changes, which increases the confidence of merging onto the highway.

– Reduced latency when starting from a stop by accounting for lead vehicle jerk.

– Enabled faster identification of red light runners by evaluating their current kinematic state against their expected braking profile.


Availability:

Available in United States


Models:

S

3

X

Y

Full Self-Driving (Beta) Suspension

We have reset the "Forced Autopilot Disengagements" counter on
your vehicle to 0.

For maximum safety and accountability, use of Full Self-Driving (Beta) will be suspended if improper usage is detected. Improper usage is when you, or another driver of your vehicle, receive five 'Forced Autopilot Disengagements'. A disengagement is when the Autopilot system disengages for the remainder of a trip after the driver receives several audio and visual warnings for inattentiveness. Driver-initiated disengagements do not count as improper usage and are expected from the driver. Keep your hands on the wheel and remain attentive at all times. Use of any hand-held devices while using Autopilot is not allowed.


Availability:

Available in United States


Models:

S

3

X

Y

Seat Belt System Enhancement

This enhancement builds upon your vehicle's superior crash protection – based upon regulatory and industry standard crash testing – by now using Tesla Vision to help offer some of the most cutting-edge seat belt pretensioner performance in the event of a frontal crash. Your seat belts will now begin to tighten and protect properly restrained occupants earlier in a wider array of frontal crashes.


Availability:

Available Worldwide

Models:

S

3

X

Y

Cabin Camera

The cabin camera above your rearview mirror can now determine driver inattentiveness and provide you with audible alerts, to remind you to keep your eyes on the road when Autopilot is engaged. Camera images do not leave the vehicle itself, which means the system cannot save or transmit information unless you enable data sharing. To change your data settings, tap Controls > Software > Data Sharing on your car's touchscreen. Cabin camera does not perform facial recognition or any other method of identity verification.


Availability:

Available in United States


Models:

S

3

X

Y

Tire Configuration

Reset the learned tire settings directly after a tire rotation, swap or replacement to improve your driving experience. To reset, tap Controls > Service > Wheel & Tire Configuration > Tires.


Availability:

Available Worldwide

Models:

3

Y

Update 2022.20.9

Install Statistics

Installed on 60 cars in 4 countries

Pending on 4 cars

Dynamic Brake Lights

If you are driving over 50 km/h (31 mph) and brake forcefully, the brake lights will now flash quickly to warn other drivers that your car is rapidly slowing down. If your car stops completely, the hazard warning lights will flash until you press the accelerator or manually press the hazard warning lights button to turn them off.

Now available in Australia and New Zealand.
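A small sketch of the Dynamic Brake Lights rule described above, written as a tiny state machine. The release note only states the 50 km/h condition; the deceleration threshold for "brake forcefully" is an assumed value.

class DynamicBrakeLights:
    """Sketch of the Dynamic Brake Lights behaviour described above."""

    HARD_BRAKING_MPS2 = 6.0  # assumed threshold for "braking forcefully"

    def __init__(self) -> None:
        self.hard_braking_event = False

    def update(self, speed_kph: float, decel_mps2: float,
               accel_pressed: bool, hazard_button_pressed: bool) -> str:
        # Forceful braking above 50 km/h starts a hard-braking event.
        if speed_kph > 50.0 and decel_mps2 >= self.HARD_BRAKING_MPS2:
            self.hard_braking_event = True
        if self.hard_braking_event:
            if speed_kph == 0.0:
                # At a standstill, hazards flash until the accelerator or the
                # hazard-light button is pressed.
                if accel_pressed or hazard_button_pressed:
                    self.hard_braking_event = False
                    return "normal"
                return "flash_hazard_lights"
            if decel_mps2 >= self.HARD_BRAKING_MPS2:
                return "flash_brake_lights"
            # Braking eased off before stopping: the event is over.
            self.hard_braking_event = False
        return "normal"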


Availability:

Unknown Availability in United States


Models:

3

Y

Speed Assist

Your vehicle is now running Tesla Vision! It will rely on camera vision coupled with neural net processing to deliver certain Autopilot and active safety features. Vehicles using Tesla Vision have received top safety ratings, and fleet data shows that it provides overall enhanced safety for our customers. Note that, with Tesla Vision, available following distance settings are from 2-7 and Autosteer top speed is 85 mph (140 km/h).


Availability:

Available in United States


Update 2022.20.8

Install Statistics

Installed on 143 cars in 48 countries

Pending on 12 cars

Minor Fixes

This release contains minor bug fixes and improvements.


Availability:

Available in United States


Models:

S

3

X

Y

Update 2022.20.7

Install Statistics

Installed on 55 cars in 46 countries

Pending on 1 car

Minor Fixes

This release contains minor bug fixes and improvements.


Availability:

Available in United States


Models:

S

3

X

Y

Update 2022.20.6

Install Statistics

Installed on 18 cars in 42 countries

Pending on 1 car

Minor Fixes

This release contains minor bug fixes and improvements.


Availability:

Available in United States


Models:

S

3

X

Y

Update 2022.20.5.1

Install Statistics

Installed on 1 car

Pending on 0 cars

Minor Fixes

This release contains minor bug fixes and improvements.


Availability:

Available in United States


Models:

S

3

X

Y

Update 2022.20.5

Cabin Camera

The cabin camera above your rearview mirror can now determine driver inattentiveness and provide you with audible alerts, to remind you to keep your eyes on the road when Autopilot is engaged. Camera images do not leave the vehicle itself, which means the system cannot save or transmit information unless you enable data sharing. To change your data settings, tap Controls > Software > Data Sharing on your car's touchscreen. Cabin camera does not perform facial recognition or any other method of identity verification.


Availability:

Available in United States


Models:

S

3

X

Y

Turkish Voice Navigation

Your navigation voice guidance is now available in Turkish. To switch your language setting, tap Controls > Display > Voice Navigation Language.


Availability:

Available Worldwide

Models:

S

3

X

Y

Tyre Configuration

Reset the learned tyre settings directly after a tyre rotation, swap or replacement to improve your driving experience. To reset, tap Controls > Service > Wheel & Tyre Configuration > Tyres.


Availability:

Available Worldwide

Models:

3

Tesla Adaptive Suspension

Tesla Adaptive Suspension will now adjust ride height for an upcoming rough road section. This adjustment may occur at various locations, subject to availability, as the vehicle downloads rough road map data generated by Tesla cars. The instrument cluster will continue to indicate when the suspension is raised for comfort. To enable this feature, tap Controls > Suspension > Adaptive Suspension Damping, and select the Comfort or Auto setting.


Availability:

Available in United States


Models:

Refresh S

Refresh X

Sentry Mode

Sentry Mode continuously monitors your car's surroundings while it's locked and parked. When enabled, the car automatically enters the Standby state while its cameras and sensors remain powered to detect potential threats and trigger an appropriate response state: Alert or Panic. To enable Sentry Mode, go to Controls > Safety > Sentry Mode.
If a minimal threat is detected, such as someone leaning on your car, Sentry Mode switches to the Alert state, displaying a message on your touchscreen indicating that cameras are recording.
If a major threat is detected, such as someone breaking a window, Sentry Mode switches to the Panic state. In this state, the touchscreen increases to maximum brightness, and you receive a notification on your mobile app.
To save the video clip captured while in Sentry Mode, you must insert a formatted USB flash drive into one of your USB ports beforehand. Sentry Mode requires more than 20% battery to operate. If your battery falls below 20% while the feature is active, Sentry Mode turns off and you receive a notification on your mobile app.
Note that Sentry Mode is designed to enhance the security of your car, but cannot protect your car from all possible threats.

NEW: Now available in Israel
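The description above reads as a three-state machine (Standby, Alert, Panic) gated by battery level; a minimal sketch under those assumptions, with the threat classification taken as an input:

from enum import Enum
from typing import Optional

class SentryState(Enum):
    OFF = "off"
    STANDBY = "standby"
    ALERT = "alert"    # minimal threat, e.g. someone leaning on the car
    PANIC = "panic"    # major threat, e.g. a broken window

def next_sentry_state(current: SentryState,
                      threat: Optional[str],
                      battery_pct: float) -> SentryState:
    """Transition Sentry Mode as described above. The 'minimal'/'major'
    classification is assumed to come from the cameras and sensors."""
    # Below 20% battery the feature turns off (and the app is notified).
    if battery_pct < 20.0:
        return SentryState.OFF
    if current == SentryState.OFF:
        return current
    if threat == "major":
        return SentryState.PANIC
    if threat == "minimal" and current == SentryState.STANDBY:
        return SentryState.ALERT
    return current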


Availability:

Available in United States


Models:

S

3

X

Y

Seat Belt System Enhancement

This enhancement builds upon your vehicleโ€™s superior crash protection โ€“ based upon regulatory and industry standard crash testing โ€“ by now using Tesla Vision to help offer some of the most cutting-edge seat belt pretensioner performance in the event of a frontal crash. Your seat belts will now begin to tighten and protect properly restrained occupants earlier in a wider array of frontal crashes.


Availability:

Available Worldwide

Models:

Refresh S

Refresh X

Y

Polish Voice Navigation

Your navigation voice guidance is now available in Polish. To switch your language setting, tap Controls > Display > Voice Navigation Language.


Availability:

Available Worldwide

Models:

S

3

X

Y

Green Traffic Light Chime

A chime will play when the traffic light you are waiting for turns green. If you are waiting behind another vehicle, the chime will play once the vehicle advances unless Traffic-Aware Cruise Control or Autosteer is active.
Note: This chime is only designed as a notification. It is the driver's responsibility to observe their environment and make decisions accordingly.
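Read literally, the chime logic above amounts to two triggers and one suppression. A minimal sketch, with every input signal an assumed name:

def should_play_green_light_chime(light_turned_green: bool,
                                  waiting_behind_vehicle: bool,
                                  lead_vehicle_advanced: bool,
                                  tacc_or_autosteer_active: bool) -> bool:
    """Chime when the light you are waiting for turns green; if waiting behind
    another vehicle, chime once that vehicle advances, unless Traffic-Aware
    Cruise Control or Autosteer is active."""
    if waiting_behind_vehicle:
        return lead_vehicle_advanced and not tacc_or_autosteer_active
    return light_turned_green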


Availability:

Available Worldwide

Models:

S

3

X

Y

Speed Assist

Speed Assist now leverages your car's cameras to detect speed limit signs. This improves the accuracy of speed limit data on local roads and highways in select countries. Detected speed limit signs will be displayed in the driving visualisation.


Availability:

Unknown Availability in United States


Models:

S

3

X

Y

Update 2022.20

Install Statistics

Installed on 1 car in 4 countries

Pending on 0 cars

Car Colouriser

Customise how your car appears on the touchscreen and mobile app with the Car Colouriser. Change the colour of your car's exterior by tapping Controls > Software > Colouriser icon, or using Colouriser in the ToyBox.


Availability:

Available in United States


Models:

S

3

X

Y