XPENG becomes the only Chinese automaker invited to present at the 2025 CVPR autonomous driving workshop
- XPENG invited to present at the CVPR Workshop on Autonomous Driving (WAD), one of the field's premier global forums
- XPENG was the sole Chinese automaker invited by the conference this year to present its autonomous driving R&D progress, sharing the stage with industry leaders such as Waymo and NVIDIA
- XPENG delivered a speech titled "Scaling up Autonomous Driving via Large Foundation Models". On the same day, XPENG announced the launch in China of the G7, the world's first AI-powered car equipped with an L3 computing platform
- XPENG also proposed two key standards for L3 computing platforms: effective computing power exceeding 2,000 TOPS and onboard deployment of VLA + VLM models
GUANGZHOU, China, June 30, 2025 (GLOBE NEWSWIRE) -- XPENG Motors (“XPENG” or the “Company,” NYSE: XPEV and HKEX: 9868), a leading China-based high-tech company, was invited to present its advancements in foundational models for autonomous driving at the 2025 Conference on Computer Vision and Pattern Recognition (CVPR), the sole Chinese automotive company to receive this industry recognition.
On June 11, CVPR 2025 convened in Nashville, Tennessee, in the United States. XPENG engaged in discussions with leading industry and academic experts from Waymo, NVIDIA, UCLA, and the University of Tuebingen to explore cutting-edge autonomous driving technologies. XPENG was the only Chinese automaker invited to participate in the Workshop on Autonomous Driving (WAD). Past speakers have included the head of Tesla Autopilot, a former Tesla AI & Autopilot lead, a founding member of OpenAI, and the co-founder and CEO of Wayve.
XPENG delivered a speech titled "Scaling up Autonomous Driving via Large Foundation Models", offering peers some of the industry's most substantial practical insights. On the same day, XPENG also announced the G7, the world's first AI-powered production car equipped with an L3-grade computing platform, and introduced two key standards for next-generation L3 autonomous systems: effective computing power exceeding 2,000 TOPS and onboard deployment of VLA (Vision-Language-Action) and VLM (Vision-Language Model) models.
In this data-driven era, the frontier of autonomous driving research is shifting toward automakers with large-scale fleets and vast stores of real-world data. XPENG's participation in CVPR exemplifies how automakers are contributing practical expertise back to academic research.
About XPENG
Founded in 2014, XPENG is a leading Chinese AI-driven mobility company that designs, develops, manufactures, and markets Smart EVs, catering to a growing base of tech-savvy consumers. With the rapid advancement of AI, XPENG aspires to become a global leader in AI mobility, with a mission to drive the Smart EV revolution through cutting-edge technology, shaping the future of mobility.
To enhance the customer experience, XPENG develops its full-stack advanced driver-assistance system (ADAS) technology and intelligent in-car operating system in-house, along with core vehicle systems such as the powertrain and electrical/electronic architecture (EEA). Headquartered in Guangzhou, China, XPENG also operates key offices in Beijing, Shanghai, Silicon Valley, and Amsterdam. Its Smart EVs are primarily manufactured at its facilities in Zhaoqing and Guangzhou, Guangdong province.
XPENG is listed on the New York Stock Exchange (NYSE: XPEV) and the Hong Kong Stock Exchange (HKEX: 9868).
For more information, please visit https://www.xpeng.com/.
Contacts:
For Media Enquiries:
XPENG PR Department
Email: pr@xiaopeng.com
Source: XPENG Motors
Photos accompanying this announcement are available at
https://www.globenewswire.com/NewsRoom/AttachmentNg/e0fd9fa8-680e-4174-9393-738afe8ec35f
https://www.globenewswire.com/NewsRoom/AttachmentNg/2a99905d-51bf-478e-b337-db41da280ec7

