pxrobotics Landing Page


🤖 Robotics News

Latest posts about robotics

"robotics" - Google News

3 Robotics Stocks to Buy Now Ahead of a White House Game-Changer - Barchart.com
December 4, 2025, 6:51 pm

After AI push, Trump administration is now looking to robots - Politico
December 3, 2025, 3:17 pm

MIT researchers “speak objects into existence” using AI and robotics - MIT News
December 5, 2025, 3:00 pm

Market-Crushing AI Momentum: Top Robotics Technology Stocks Leading the 2026 Growth Trend - Seeking Alpha
December 5, 2025, 10:00 am

Walmart's AI Robotics Maker Is Sinking For This Reason After Big Run - Investor's Business Daily
December 4, 2025, 9:08 pm

IEEE Spectrum

Video Friday: Biorobotics Turns Lobster Tails Into Grippers
December 5, 2025, 5:30 pm


Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICRA 2026: 1–5 June 2026, VIENNA

Enjoy today’s videos!

EPFL scientists have integrated discarded crustacean shells into robotic devices, leveraging the strength and flexibility of natural materials for robotic applications.

[ EPFL ]

Finally, a good humanoid robot demo!

Although, having said that, I never trust video demos where something works really well once, and then just pretty well every other time.

[ LimX Dynamics ]

Thanks, Jinyan!

I understand how these structures work, I really do. But watching something rigid extrude itself from a flexible reel will always seem a little magical.

[ AAAS ]

Thanks, Kyujin!

I’m not sure what “industrial grade” actually means, but I want robots to be “automotive grade,” where they’ll easily operate for six months or a year without any maintenance at all.

[ Pudu Robotics ]

Thanks, Mandy!

When you start to suspect that your robotic EV charging solution costs more than your car.

[ Flexiv ]

Yeah uh if the application for this humanoid is actually making robot parts with a hammer and anvil, then I’d be impressed.

[ EngineAI ]

Researchers at Columbia Engineering have designed a robot that can learn a human-like sense of neatness. The researchers taught the system by showing it millions of examples, not teaching it specific instructions. The result is a model that can look at a cluttered tabletop and rearrange scattered objects in an orderly fashion.

[ Paper ]

Why haven’t we seen this sort of thing in humanoid robotics videos yet?

[ HUCEBOT ]

While I definitely appreciate in-the-field testing, it’s also worth asking to what extent your robot is actually being challenged by the in-the-field field that you’ve chosen.

[ DEEP Robotics ]

Introducing HMND 01 Alpha Bipedal — autonomous, adaptive, designed for real-world impact. Built in 5 months, walking stably after 48 hours of training.

[ Humanoid ]

Unitree says that “this is to validate the overall reliability of the robot” but I really have to wonder how useful this kind of reliability validation actually is.

[ Unitree ]

This University of Pennsylvania GRASP on Robotics Seminar is by Jie Tan from Google DeepMind, on “Gemini Robotics: Bringing AI into the Physical World.”

Recent advancements in large multimodal models have led to the emergence of remarkable generalist capabilities in digital domains, yet their translation to physical agents such as robots remains a significant challenge. In this talk, I will present Gemini Robotics, an advanced Vision-Language-Action (VLA) generalist model capable of directly controlling robots. Furthermore, I will discuss the challenges, learnings and future research directions on robot foundation models.

[ University of Pennsylvania GRASP Laboratory ]

MIT’s AI Robotics Lab Director Is Building People-Centered Robots
December 3, 2025, 7:00 pm


Daniela Rus has spent her career breaking barriers—scientific, social, and material—in her quest to build machines that amplify rather than replace human capability. She made robotics her life’s work, she says, because she understood it was a way to expand the possibilities of computing while enhancing human capabilities.

“I like to think of robotics as a way to give people superpowers,” Rus says. “Machines can help us reach farther, think faster, and live fuller lives.”

Daniela Rus

Employer: MIT

Job title: Professor of electrical engineering and computer science; director of the MIT Computer Science and Artificial Intelligence Laboratory

Member grade: Fellow

Alma maters: University of Iowa, in Iowa City; Cornell

Her dual missions, she says, are to make technology humane and to make the most of the opportunities afforded by life in the United States. The two goals have fueled her journey from a childhood living under a dictatorship in Romania to the forefront of global robotics research.

Rus, who is director of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), is the recipient of this year’s IEEE Edison Medal, which recognizes her for “sustained leadership and pioneering contributions in modern robotics.”

An IEEE Fellow, she describes the recognition as a responsibility to further her work and mentor the next generation of roboticists entering the field.

The Edison Medal is the latest in a string of honors she has received. In 2017 she won an Engelberger Robotics Award from the Robotic Industries Association. The following year, she was honored with the Pioneer in Robotics and Automation Award by the IEEE Robotics and Automation Society. The society recognized her again in 2023 with its IEEE Robotics and Automation Technical Field Award.

From Romania to Iowa

Rus was born in Cluj-Napoca, Romania, during the rule of dictator Nicolae Ceausescu. Her early life unfolded in a world defined by scarcity—rationed food, intermittent electricity, and a limited ability to move up or out. But she recalls that, amid the stifling insufficiencies, she was surrounded by an irrepressible warmth and intellectual curiosity—even when she was making locomotive screws in a state-run factory as part of her school’s curriculum.

“Life was hard,” she says, “but we had great teachers and strong communities. As a child, you adapt to whatever is around you.”

Her father, Teodor, was a computer scientist and professor, and her mother, Elena, was a physicist.

In 1982, when Rus was 19, her father emigrated to the United States to join the faculty at the University of Iowa, in Iowa City. It was an act of courage and conviction. Within a year, Daniela and her mother joined him there.

“He wanted the freedom to think, to publish, to explore ideas,” Rus says. “And I reaped the benefits of being free from the limitations of our homeland.”

America’s open horizons were intoxicating, she says.

A lecture that changed everything

Rus decided to pursue a degree at her father’s university, where her life changed direction, she says. One afternoon, John Hopcroft—a Turing Award–winning Cornell computer scientist renowned for his work on algorithms and data structures—gave a talk on campus. His message was simple but electrifying, Rus says: Classical computer science had been solved. The next frontier, Hopcroft declared, was computations that interact with the messy physical world.

For Rus, the idea was a revelation.

“It was as if a door had opened,” she says. “I realized the future of computing wasn’t just about logic and code; it was about how machines can perceive, move, and help us in the real world.”

After the lecture, she introduced herself to Hopcroft and told him she wanted to learn from him. Not long after earning her bachelor’s degree in computer science and mathematics in 1985, she applied to get a master’s degree at Cornell, where Hopcroft became her graduate advisor. Rus developed algorithms there for dexterous robotic manipulation—teaching machines to grasp and move objects with precision. She earned her master’s in computer science in 1990, then stayed on at Cornell to work toward a doctorate.

“I like to think of robotics as a way to give people superpowers. Machines can help us reach farther, think faster, and live fuller lives.”

In 1993 she earned her Ph.D. in computer science, then took a position as an assistant professor of computer science at Dartmouth College, in Hanover, N.H. She founded the college’s robotics laboratory and expanded her work into distributed robotics. She developed teams of small robots that cooperated on warehouse tasks, ensuring products are correctly gathered to fulfill orders, packaged safely, and routed efficiently to their destinations.

Despite a lack of traditional machine shop facilities for fabrication on the Hanover campus, Rus found a way. She pioneered the use of 3D printing to rapidly prototype and build robots.

In 2003 she left Dartmouth to become a professor in the electrical engineering and computer science department at MIT.

The robotics lab she created at Dartmouth moved with her to MIT and became known as the Distributed Robotics Laboratory (DRL). In 2012 she was named director of MIT’s Computer Science and Artificial Intelligence Laboratory, the school’s largest interdisciplinary lab, with 60 research groups including the DRL. She also continues to serve as the DRL’s principal investigator.

The science of physical intelligence

Rus now leads pioneering research at the intersection of AI and robotics, a field she calls physical intelligence. It’s “a new form of intelligent machine that can understand dynamic environments, cope with unpredictability, and make decisions in real time,” she says.

Her lab builds soft-body robots inspired by nature that can sense, adapt, and learn. They are AI-driven systems that passively handle tasks—such as self-balancing and complex articulation similar to that done by the human hand—because their shape and materials minimize the need for heavy processing.

Such machines, she says, someday will be able to navigate different environments, perform useful functions without external control, and even recover from disturbances to their route planning. Researchers also are exploring ways to make them more energy-efficient.

One prototype developed by Rus’s team is designed to retrieve foreign objects from the body, including batteries swallowed by children. The ingestible robots are artfully folded, similar to origami, so they are small enough to be swallowed. Embedded magnetic materials allow doctors to steer the soft robots and control their shape. Upon arriving in the stomach, a soft bot can be programmed to wrap around a foreign object and guide it safely out of the patient’s body.

CSAIL researchers also are working on small robots that can carry a medication and release it at a specific area within the digestive tract, bypassing the stomach acid known to diminish some drugs’ efficacy. Ingestible robots also could patch up internal injuries or ulcers. And because they’re made from digestible materials such as sausage casings and biocompatible polymers, the robots can perform their assigned tasks and then get safely absorbed by the body, she says.

Health care isn’t the only application on the horizon for such AI-driven technologies. Robots with physical intelligence might someday help firefighters locate people trapped in burning buildings, find miners after a cave-in, and provide valuable situational awareness information to emergency response teams in the aftermath of natural disasters, Rus says.

“What excites me is the possibility of giving people new powers,” she says. “Machines that can think and move safely in the physical world will let us extend human reach—at work, at home, in medicine … everywhere.”

To make such a vision a reality, she has expanded her technical interests to include several complementary lines of research.

She’s working on self-reconfiguring and modular robots such as MIT’s M-Blocks and NASA’s SuperBots, which can attach, detach, and rearrange themselves to form shapes suited for different actions such as slithering, climbing, and crawling.

With networked robots—including those Amazon uses in its warehouses—thousands of machines can operate as a large adaptive system. The machines communicate continuously to divide tasks, avoid collisions, and optimize package routing.

Rus’s team also is making advances in human-robot interaction, such as reading brainwave activity and interpreting sign language through a smart glove.

To further her plan of putting all the computerized smarts the robots need within their physical bodies instead of in the cloud, she helped found Liquid AI in 2023. The company, based in Cambridge, Mass., develops liquid neural networks, inspired by the simple brains of worms, that can learn and adapt continuously. The word liquid in this case refers to the adaptability, flexibility, and dynamic nature of the team’s model architecture. It can change shape and adapt to new data inputs, and it fits within constraints imposed by the hardware in which it’s contained, she says.

Finding community in IEEE

Rus joined IEEE at one of its robotics conferences when she was a graduate student.

“I think I signed up just to get the student discount,” she says with a laugh. “But IEEE turned out to be the place where my community lived.”

She credits the organization’s conferences, journals, and collaborative spirit with shaping her professional growth.

“The exchange of ideas, the chance to test your thinking against others—it’s invaluable,” she says. “It’s how our field moves forward.”

Rus continues to serve on IEEE panels and committees, mentoring the next generation of roboticists.

“IEEE gave me a platform,” Rus says. “It taught me how to communicate, how to lead, and how to dream bigger.”

Living the American dream

Looking back, Rus sees her story as a testament to unforeseen possibilities.

“When I was growing up in Romania, I couldn’t even imagine living in America,” she says. “Now I’m here, working with brilliant students, building robots that help people, and trying to make a difference. I feel like I’m living the American dream.”

In a nod to a memorable song from the Broadway musical Hamilton, Rus echoes Alexander Hamilton’s determination to make the most of his opportunities, saying, “I don’t ever want to throw away my shot.”

Video Friday: Disney’s Robotic Olaf Makes His Debut
November 29, 2025, 4:30 pm


Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

SOSV Robotics Matchup: 1–5 December 2025, ONLINE
ICRA 2026: 1–5 June 2026, VIENNA

Enjoy today’s videos!

Step behind the scenes with Walt Disney Imagineering Research & Development and discover how Disney uses robotics, AI, and immersive technology to bring stories to life! From the brand new self-walking Olaf in World of Frozen and BDX Droids to cutting-edge attractions like Millennium Falcon: Smugglers Run, see how magic meets innovation.

[ Disney Experiences ]

We just released a new demonstration of Mentee’s V3 humanoid robots completing a real world logistics task together. Over an uninterrupted 18-minute run, the robots autonomously move 32 boxes from eight piles to storage racks of different heights. The video shows steady locomotion, dexterous manipulation, and reliable coordination throughout the entire task.

And there’s an uncut 18-minute version of this at the link.

[ MenteeBot ]

Thanks, Yovav!

This video contains graphic depictions of simulated injuries. Viewer discretion is advised.

In this immersive overview, guided by the DARPA Triage Challenge program manager, retired Army Col. Jeremy C. Pamplin, M.D., you’ll experience how teams of innovators, engineers, and DARPA are redefining the future of combat casualty care. Be sure to look all around! Check out competition runs, behind-the-scenes of what it takes to put on a DARPA Challenge, and glimpses into the future of lifesaving care.

Those couple of minutes starting at 6:50, with the human medic and robot teaming up, were particularly cool.

[ DARPA ]

You don’t need to build a humanoid robot if you can just make existing humanoids a lot better.

I especially love 0:45 because you know what? Humanoids should spend more time sitting down, for all kinds of reasons. And of course, thank you for falling and getting up again, albeit on some of the squishiest grass on the planet.

[ Flexion ]

“Human-in-the-Loop Gaussian Splatting” wins best paper title of the week.

[ Paper ] via [ IEEE Robotics and Automation Letters in IEEE Xplore ]

Scratch that, “Extremum Seeking Controlled Wiggling for Tactile Insertion” wins best paper title of the week.

[ University of Maryland PRG ]

The battery swapping on this thing is... Unfortunate.

[ LimX Dynamics ]

To push the boundaries of robotic capability, researchers in the Department of Mechanical Engineering at Carnegie Mellon University, in collaboration with the University of Washington and Google DeepMind, have developed a new tactile sensing system that enables four-legged robots to carry unsecured, cylindrical objects on their backs. This system, known as LocoTouch, features a network of tactile sensors that spans the robot’s entire back. As an object shifts, the sensors provide real-time feedback on its position, allowing the robot to continuously adjust its posture and movement to keep the object balanced.

[ Carnegie Mellon University ]

This robot is in more need of googly eyes than any other robot I’ve ever seen.

[ Zarrouk Lab ]

DPR Construction has deployed Field AI’s autonomy software on a quadruped robot at the company’s job site in Santa Clara, Calif., to greatly improve its daily surveying and data collection processes. By automating what has traditionally been a very labor-intensive and time-consuming process, Field AI is helping the DPR team operate more efficiently and effectively, while increasing project quality.

[ FieldAI ]

In our second episode of AI in Motion, our host, Waymo AI researcher Vincent Vanhoucke, talks with robotics startup founder Sergey Levine, who left a career in academic research to build better robots for the home and workplace.

[ Waymo ]

For This Engineer, Taking Deep Dives Is Part of the Job
November 27, 2025, 1:00 pm


Early in Levi Unema’s career as an electrical engineer, he was presented with an unusual opportunity. While working on assembly lines at an automotive parts supplier in 2015, he got a surprise call from his high-school science teacher that set him off on an entirely new path: piloting underwater robots to explore the ocean’s deepest abysses.

That call came from Harlan Kredit, a nationally renowned science teacher and board member of a Rhode Island-based nonprofit called the Global Foundation for Ocean Exploration (GFOE). The organization was looking for an electrical engineer to help design, build, and pilot remotely operated vehicles (ROVs) for the U.S. National Oceanic and Atmospheric Administration.

Levi Unema

Employer: Deep Exploration Solutions

Occupation: ROV engineer

Education: Bachelor’s degree in electrical engineering, Michigan Technological University

This was an exciting break for Unema, a Washington state native who had grown up tinkering with electronics and exploring the outdoors. Unema joined the team in early 2016 and has since helped develop and operate deep-sea robots for scientific expeditions around the globe.

The GFOE’s contract with NOAA expired in July, forcing the engineering team to disband. But soon after, Unema teamed up with four former colleagues to start their own ROV consultancy, called Deep Exploration Solutions, to continue the work he’s so passionate about.

“I love the exploration and just seeing new things every day,” he says. “And the engineering challenges that go along with it are really exciting, because there’s a lot of pressure down there and a lot of technical problems to solve.”

Nature and Technology

Unema’s fascination with electronics started early. Growing up in Lynden, Wash., he took apart radios, modified headphones, and hacked together USB chargers from AA batteries. “I’ve always had to know how things work,” he says. He was also a Boy Scout, and much of his youth was spent hiking, camping, and snowboarding.

That love of both technology and nature can be traced back, at least in part, to his parents—his father was a civil engineer, and his mother was a high-school biology teacher. But another major influence growing up was Kredit, the science teacher who went on to recruit him. (Kredit was also a colleague of Unema’s mother.)

Kredit has won numerous awards for his work as an educator, including the Presidential Award for Excellence in Science Teaching in 2004. Like Unema, he loves the outdoors, and he is Yellowstone National Park’s longest-serving park ranger. “He was an excellent science teacher, very inspiring,” says Unema.

When Unema graduated high school in 2010, he decided to enroll at his father’s alma mater, Michigan Technological University, to study engineering. He was initially unsure what discipline to follow and signed up for the general engineering course, but he quickly settled on electrical engineering.

A summer internship at a steel mill run by the multinational corporation ArcelorMittal introduced Unema to factory automation and assembly lines. After graduating in 2014 he took a job at Gentex Corp. in Zeeland, Mich., where he worked on manufacturing systems and industrial robotics.

Diving Into Underwater Robotics

In late 2015, he got the call from Kredit asking if he’d be interested in working on underwater robots for GFOE. The role involved not just engineering these systems, but also piloting them. Taking the plunge was a difficult choice, says Unema, as he’d just been promoted at Gentex. But the promise of travel combined with the novel engineering challenges made it too good an opportunity to turn down.

Building technology that can withstand the crushing pressure at the bottom of the ocean is tough, he says, and you have to make trade-offs between weight, size, and cost. Everything has to be waterproof, and electronics have to be carefully isolated to prevent them from grounding on the ocean floor. Some components are pressure-tolerant, but most must be stored in pressurized titanium flasks, so the components must be extremely small to minimize the size of the metallic housing.
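To get a feel for just how crushing that pressure is, here is a back-of-envelope hydrostatic calculation. This is a sketch for illustration only: the 6,000-meter depth and the seawater density are assumptions of mine, not figures from the article.

```python
# Rough hydrostatic pressure at ROV working depth: P = P_atm + rho * g * h.
# Illustrative constants (not from the article):
RHO_SEAWATER = 1025.0   # kg/m^3, typical seawater density
G = 9.81                # m/s^2, gravitational acceleration
ATM = 101_325.0         # Pa, atmospheric pressure at the surface

def pressure_at_depth(depth_m: float) -> float:
    """Absolute pressure in pascals at a given depth in seawater."""
    return ATM + RHO_SEAWATER * G * depth_m

# Assuming a deep-sea ROV working at 6,000 m:
p = pressure_at_depth(6000)
print(f"{p / 1e6:.1f} MPa (~{p / ATM:.0f} atm)")  # prints 60.4 MPa (~596 atm)
```

At roughly 600 times surface pressure, it is easy to see why every housing, seal, and cable penetration becomes a weight-versus-cost trade-off.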

Unema conducts predive checks from the Okeanos Explorer’s control room. Once the ROV is launched, scientists will watch the camera feeds and advise his team where to direct the vehicle.
Art Howard

“You’re working very closely with the mechanical engineer to fit the electronics in a really small space,” he says. “The smaller the cylinder is, the cheaper it is, but also the less mass on the vehicle. Every bit of mass means more buoyancy is required, so you want to keep things small, keep things light.”

Communications are another challenge. The ROVs rely on several kilometers of cable containing just three single-mode optical fibers. “All the communication needs to come together and then go up one cable,” Unema says. “And every year new instruments consume more data.”

He works exclusively on ROVs that are custom made for scientific research, which require smoother control and considerably more electronics and instrumentation than the heavier-duty vehicles used by the oil and gas industry. “The science ones are all hand-built, they’re all quirky,” he says.

Unema’s role spans the full life cycle of an ROV’s design, construction, and operation. He primarily spends winters upgrading and maintaining vehicles and summers piloting them on expeditions. At GFOE, he mainly worked on two ROVs for NOAA called Deep Discoverer and Seirios, which operate from the ship Okeanos Explorer. But he has also piloted ROVs for other organizations over the years, including the Schmidt Ocean Institute and the Ocean Exploration Trust.

Unema’s new consultancy, Deep Exploration Solutions, has been given a contract to do the winter maintenance on the NOAA ROVs, and the firm is now on the lookout for more ROV design and upgrade work, as well as piloting jobs.

An Engineer’s Life at Sea

On expeditions, Unema is responsible for driving the robot. He follows instructions from a science team that watches the ROV’s video feed to identify things like corals, sponges, or deepwater creatures that they’d like to investigate in more detail. Sometimes he will also operate hydraulic arms to sample particularly interesting finds.

In general, the missions are aimed at discovering new species and mapping the range of known ones, says Unema. “There’s a lot of the bottom of the ocean where we don’t know anything about it,” he says. “Basically every expedition there’s some new species.”

This involves being at sea for weeks at a time. Unema says that life aboard ships can be challenging—many new crew members get seasick, and you spend almost a month living in close quarters with people you’ve often never met before. But he enjoys the opportunity to meet colleagues from a wide variety of backgrounds who are all deeply enthusiastic about the mission.

“It’s like when you go to scout camp or summer camp,” he says. “You’re all meeting new people. Everyone’s really excited to be there. We don’t know what we’re going to find.”

Unema also relishes the challenge of solving engineering problems with the limited resources available on the ship. “We’re going out to the middle of the Pacific,” he says. “Things break, and you’ve got to fix them with what you have out there.”

If that sounds more exciting than daunting, and you’re interested in working with ROVs, Unema’s main advice is to talk to engineers in the field. It’s a small but friendly community, he says, so just do your research to see what opportunities are available. Some groups, such as the Ocean Exploration Trust, also operate internships for college students to help them get experience in the field.

And Unema says there are very few careers quite like it. “I love it because I get to do all aspects of engineering—from idea to operations,” he says. “To be able to take something I worked on and use it in the field is really rewarding.”

This article appears in the December 2025 print issue as “Levi Unema.”

Remote Robotics Could Widen Access to Stroke Treatment
November 24, 2025, 2:15 pm


When treating strokes, every second counts. But for patients in remote areas, it may take hours to receive treatment.

The standard treatment for a common type of stroke, caused by large clots interrupting blood flow to the brain, is a procedure called endovascular thrombectomy, or EVT. During the procedure, an experienced surgeon pilots catheters through blood vessels to the blockage, accessed through a major channel such as the femoral artery in the groin. This is typically aided by X-ray imaging, which shows the position of blood vessels.

“Good outcomes are directly associated with early treatment,” says Cameron Williams, a neurologist at the University of Melbourne and fellow with the Australian Stroke Alliance. In fact, “time is brain” is a common refrain in stroke treatment. While blood flow is stopped, about 2 million neurons die each minute. Over an hour, that adds up to 3.6 years of typical age-related brain cell loss.
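The arithmetic behind the "time is brain" refrain is easy to check. A minimal sketch: the per-minute neuron figure and the 3.6-year equivalence come from the passage above, while the implied rate of normal age-related loss is derived from them.

```python
# Checking the "time is brain" figures quoted above.
NEURONS_PER_MINUTE = 2_000_000   # neurons lost per minute of blocked flow (article)
YEARS_PER_HOUR = 3.6             # aging equivalent of one untreated hour (article)

lost_per_hour = NEURONS_PER_MINUTE * 60           # 120 million neurons
aging_loss_per_year = lost_per_hour / YEARS_PER_HOUR  # implied normal aging rate

print(f"Neurons lost per hour of stroke: {lost_per_hour:,}")
print(f"Implied age-related loss: {aging_loss_per_year:,.0f} neurons/year")
# prints 120,000,000 and 33,333,333 neurons/year
```

In other words, the stated figures imply that normal aging costs on the order of 33 million neurons a year, which an untreated stroke burns through in about 18 minutes.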

But in remote places like Darwin, in the north of Australia, this treatment isn’t available. Instead, it could take 6 hours or more and an expensive aeromedical transfer to get a patient to a medical center, says Williams. There are similar geographical challenges to stroke treatment access all over the world. Sparing a rural patient hours of transfer time to a hospital with an on-site expert could save their life, prevent disability, or preserve years of their quality of life.

That’s why there is a particular interest in the possibility of emergency stroke treatment performed remotely with the help of robotics. Machines placed in smaller population centers could connect patients to expert surgeons miles away, and shave hours off the time to treatment. Two companies have recently demonstrated their remote capabilities. In September, doctors in Toronto completed a series of increasingly distant brain angiograms, the X-ray imaging element of an EVT, eventually performing two angiograms between crosstown hospitals using the N1 system from Remedy Robotics. And in October, Sentante equipment facilitated a simulated EVT between a surgeon in Jacksonville, Fla., and a cadaver with artificial blood flow in Dundee, Scotland.

“All those stories connected is not only proof of concept. It’s coming to realization and implementation that robotic and remote interventions can be performed, and soon will be the reality for many centers in rural areas,” says Vitor Pereira, a neurosurgeon at Unity Health who performed the Toronto procedures.

Two Approaches to Remote EVT

One challenge of performing these remote procedures is maintaining strong, fast connections at large distances. “Is there a real life need to do this transatlantically? Probably not,” says Edvardas Satkauskas, CEO of Sentante. “It demonstrates the capabilities. Even this distance is feasible.” Although performing a procedure remotely introduces issues related to latency, the pace of EVT—while urgent—is not reliant on instant reactions, says Satkauskas.

Redundant connections are also an important safeguard against dropped links. Remedy has taken measures, for instance, to ensure that its robot monitors connection quality and doesn’t make any harmful movements when the connection is poor, says David Bell, the company’s CEO.

Though both companies are careful about disclosing details of products and research that are still in development, there are notable differences between their approaches.

“Our device leans heavily on artificial intelligence,” says Bell. Machine learning is incorporated into how the Remedy device manipulates guide wires and creates an informational overlay atop X-ray images for remote physicians, who can control the robot with a laptop and software interface. The long-term goal is for a surgeon to be able to log on to Remedy software at short notice from a central location to interact with Remedy robots in multiple hospitals as needed.

In contrast, Sentante uses a control console meant to look and feel like the catheters and guide wires that surgeons are accustomed to manipulating in manual EVT, including force feedback that mimics the resistance they would feel in person.

“It’s very intuitive to use this,” says Ricardo Hanel, a neurosurgeon with Baptist Health in Jacksonville, who was on the piloting end of the Sentante demonstration. Naturalistic feeling in the transatlantic procedure came with reported latency of around 120 milliseconds. Hanel is also on Sentante’s medical advisory board.

Sentante has not yet implemented AI-assisted movements of its robot, though a plan is in place to capture as much training data as possible, both from images and force measurements. “As we joke, we had to build a sophisticated piece of hardware to become a software company,” says CEO Satkauskas.

The Path to Clinical Use

Hanel expressed optimism that any control system would be easily learned by surgeons.

“I think the main limitation for robotics is that you are still dependent on bedside interventionists,” says Ahmet Gunkan, an interventional radiologist at the University of Arizona, who has written about robots and endovascular interventions.

Depending on the system, these bedside assistants might be responsible for a variety of tasks related to preparing and communicating with the patient, sterilizing and preparing equipment, loading step-specific parts, and repositioning X-ray or robotic equipment. Both CEOs note that while proper training will be essential, there are ways to reduce the burden on health care providers at the patient site.

In the case of remote operations, “it was important to us that the robot could do the entire thing,” says Bell. Remedy’s system has been designed to handle as much of the procedure as possible and to streamline the moments when bedside human interaction is necessary. For example, since the earlier version used in Toronto, changes have been made to maintain a clean line of communication between bedside and remote clinicians, facilitated by the Remedy system, says Bell.

A team at St. Michael’s Hospital in Toronto performs the world’s first remote robot-assisted neurovascular procedure over a network, on 28 August 2025. Katie Cooper and Kevin Van Paassen/Unity Health Toronto

Though remote EVT is a high priority, systems capable of the procedure may first be approved for other endovascular procedures performed locally. The hope is that precision robotics leads to better patient outcomes, whether the surgeon is in the next room or the next country.

Remedy has a clinical trial planned in 2026 for on-premise neurointerventions, and has partnered with the Australian Stroke Alliance to distribute its N1 system and conduct a future clinical trial for remote procedures. Eventually the robot could be used to treat as many as 30 different conditions, says Bell.

Satkauskas views Sentante’s equipment as a flexible platform for endovascular procedures throughout the body, which could help keep bedside clinicians familiar with the device. The system may go to market in the EU next year for peripheral vascular interventions, which restore blood flow to the limbs, and it has a breakthrough device designation from the U.S. FDA for remote stroke treatment.

There are other players in the space. For example, an early telerobotic effort from a company called Corindus is still ongoing after the company’s acquisition by Siemens in 2019. And Pereira notes that Xcath has also demonstrated a long-distance simulated EVT and looks to perform local robotic EVT with live patients soon.

“I think it’s an exciting time to be a neurointerventionalist,” says Hanel.

Blog – Hackaday

Emulate ROMs at 12MHz With Pico2 PIO
December 6, 2025, 6:00 pm
Nothing lasts forever, and that includes the ROMs required to make a retrocomputer run. Even worse, what if you’re rolling your own firmware? Period-appropriate EPROMs and their programmers aren’t always …read more
Something New Every Day, Something Relevant Every Week?
December 6, 2025, 3:00 pm
The site is called Hackaday, and has been for 21 years. But it was only for maybe the first half-year that it was literally a hack a day. By the …read more
Electronic Dice Built The Old Fashioned Way
December 6, 2025, 12:00 pm
If you wanted to build an electronic dice, you might grab an Arduino and a nice OLED display to whip up something fancy. You could even choose an ESP32 and …read more
Sudo Clean Up My Workbench
December 6, 2025, 9:00 am
[Engineezy] might have been watching a 3D printer move when inspiration struck: Why not build a robot arm to clean up his workbench? Why not, indeed? Well, all you need …read more
Blue Hedgehog, Meet Boing Ball: Can Sonic Run on Amiga?
December 6, 2025, 6:00 am
The Amiga was a great game system in its day, but there were some titles it was just never going to get. Sonic the Hedgehog was one of them– SEGA …read more

Robohub

Robot Talk Episode 136 – Making driverless vehicles smarter, with Shimon Whiteson
December 5, 2025, 1:10 pm
Claire chatted to Shimon Whiteson from Waymo about machine learning for autonomous vehicles. Shimon Whiteson is a Professor of Computer Science at the University of Oxford and a Senior Staff Research Scientist at Waymo UK. His research focuses on deep reinforcement learning and imitation learning, with applications in robotics and video games. He completed his […]
Teaching robot policies without new demonstrations: interview with Jiahui Zhang and Jesse Zhang
December 4, 2025, 10:47 am
The ReWiND method, which consists of three phases: learning a reward function, pre-training, and using the reward function and pre-trained policy to learn a new language-specified task online. In their paper ReWiND: Language-Guided Rewards Teach Robot Policies without New Demonstrations, which was presented at CoRL 2025, Jiahui Zhang, Yusen Luo, Abrar Anwar, Sumedh A. Sontakke, […]
Why companies don’t share AV crash data – and how they could
December 1, 2025, 11:08 am
Anton Grabolle / Autonomous Driving / Licenced by CC-BY 4.0 By Susan Kelley Autonomous vehicles (AVs) have been tested as taxis for decades in San Francisco, Pittsburgh and around the world, and trucking companies have enormous incentives to adopt them. But AV companies rarely share the crash- and safety-related data that is crucial to improving […]
Robot Talk Episode 135 – Robot anatomy and design, with Chapa Sirithunge
November 28, 2025, 1:39 pm
Claire chatted to Chapa Sirithunge from University of Cambridge about what robots can teach us about human anatomy, and vice versa. Chapa Sirithunge is a Marie Sklodowska-Curie fellow in robotics at the University of Cambridge. She has an undergraduate degree and PhD in Electrical Engineering from the University of Moratuwa. Before joining the University of […]
Learning robust controllers that work across many partially observable environments
November 27, 2025, 10:03 am
In intelligent systems, applications range from autonomous robotics to predictive maintenance problems. To control these systems, the essential aspects are captured with a model. When we design controllers for these models, we almost always face the same challenge: uncertainty. We’re rarely able to see the whole picture. Sensors are noisy, models of the system are […]

Engadget is a web magazine with obsessive daily coverage of everything new in gadgets and consumer electronics

Apple's Johny Srouji could continue the company's executive exodus, according to report
December 6, 2025, 8:07 pm

Apple's Johny Srouji may be the latest company executive to seek greener pastures, according to a report from Bloomberg. The report said that Srouji, Apple's senior vice president of hardware technologies, told Tim Cook that he is "seriously considering leaving in the near future."

While the report didn't mention if Srouji has another job lined up, Bloomberg's sources claimed that he wants to join another company if he leaves Apple. Srouji joined the company in 2008 to develop Apple's first in-house system-on-a-chip and eventually led the transition to Apple silicon.

If Srouji leaves Apple, he would be the latest in a string of departures of longtime execs. At the start of the month, Apple announced that John Giannandrea, the company's senior vice president for machine learning and AI strategy, would be retiring from his role in spring 2026. A couple of days later, Bloomberg reported that the company's head of interface design, Alan Dye, would be leaving for a role at Meta. Adding to those exits, Apple also revealed that Kate Adams, who has been Apple's general counsel since 2017, and Lisa Jackson, vice president for Environment, Policy, and Social Initiatives, will both be leaving in early 2026.

The shakeup at the executive level comes after Bloomberg's Mark Gurman previously reported that Cook may not be preparing for his own departure as CEO next year. Gurman's prediction counters a report from the Financial Times that claimed that Apple was accelerating succession plans for Cook with an expected stepping down sometime next year.

This article originally appeared on Engadget at https://www.engadget.com/big-tech/apples-johny-srouji-could-continue-the-companys-executive-exodus-according-to-report-200750252.html?src=rss
Waymo's robotaxi fleet is being recalled again, this time for failing to stop for school buses
December 6, 2025, 7:02 pm

To prevent its robotaxi fleet from passing stopped school buses, Waymo is issuing another software recall in 2025. While it's not a traditional recall that pulls vehicles from the road, Waymo is voluntarily updating software for its autonomous fleet in response to an investigation from the National Highway Traffic Safety Administration. According to Waymo, the recall will be filed with the federal agency early next week.

Mauricio Peña, Waymo's chief safety officer, said in a statement that Waymo sees far fewer crashes involving pedestrians than human drivers, but that the company knows when "our behavior should be better."

"As a result, we have made the decision to file a voluntary software recall with NHTSA related to appropriately slowing and stopping in these scenarios," Peña said in a statement to multiple news outlets. "We will continue analyzing our vehicles’ performance and making necessary fixes as part of our commitment to continuous improvement."

According to the NHTSA investigation, some Waymo autonomous vehicles were seen failing to stop for school buses that had their stop signs and flashing lights deployed. The federal agency said in the report that there were instances of Waymo cars driving past stopped school buses in Atlanta and Austin, Texas.

Earlier this year, Waymo issued another software recall after some of its robotaxi fleet were seen hitting gates, chains, and similar objects. Last year, Waymo also filed two other software recalls, one of which addressed a fleet vehicle crashing into a telephone pole and another correcting how two separate robotaxis hit the same exact pickup truck that was being towed.

This article originally appeared on Engadget at https://www.engadget.com/transportation/waymos-robotaxi-fleet-is-being-recalled-again-this-time-for-failing-to-stop-for-school-buses-190222243.html?src=rss
Meta plans to push back the debut of its next mixed reality glasses to 2027
December 6, 2025, 5:24 pm

The big reveal for Meta's next mixed reality glasses is being postponed until the first half of 2027, according to a report from Business Insider. Based on an internal memo from Maher Saba, the vice president of Meta's Reality Labs Foundation, the report said that the company's project, which is codenamed "Phoenix," will no longer be scheduled for a 2026 debut.

In a separate memo, Meta execs explained that the delay would help deliver a more "polished and reliable experience." According to BI, a memo from Meta's Gabriel Aul and Ryan Cairns said this new release window is "going to give us a lot more breathing room to get this right." Meta hasn't publicly revealed many details about its Phoenix project, but The Information previously reported that it would feature a goggle-like form factor with an external power source, similar to how the Apple Vision Pro is attached to a battery pack.

In the memo from Saba, BI reported that Meta is also working on a "limited edition" wearable with the codename "Malibu 2." Yesterday, Meta announced its acquisition of Limitless, a startup that recently developed an AI wearable called Pendant. Even though Meta's current product portfolio is dominated by smart glasses and VR headsets, the Limitless acquisition and Malibu 2 project could hint at the company's plans to expand its offerings.

This article originally appeared on Engadget at https://www.engadget.com/ar-vr/meta-plans-to-push-back-the-debut-of-its-next-mixed-reality-glasses-to-2027-172437374.html?src=rss
Engadget review recap: Dell 16 Premium, Nikon ZR, Ooni Volt 2 and more
December 6, 2025, 1:00 pm

We’ve slept off our collective turkey coma and returned to the review lab here at Engadget. Our team may also be in full CES prep mode, but we’ve got a few more devices to get off our desks before 2025 is over. Catch up on all of the reviews you might have missed over the last few weeks — a perfect activity for a lazy December weekend.

Dell 16 Premium

There’s no denying the design of the Dell 16 Premium makes the laptop live up to its name. Unfortunately, all of that polish leads to some issues: a high price and hampered usability. “The more I looked at the Dell 16 Premium's beautiful facade, the more I wanted something... more,” senior reporter Devindra Hardawar wrote. “It needs more usable ports, like HDMI and a full-sized SD card reader. It needs more useful function keys that are visible in bright light — and also stay in one place — so I can touch type more easily. And for the love of god, just give up on the invisible trackpad.”

DJI Osmo Action 6

DJI’s drone business in the US faces an uncertain future, and the company’s action cams could be swept up in the ordeal as well. Thankfully, our contributing reporter Steve Dent resides in the EU, where he observed firsthand the Osmo Action 6’s superior low-light performance and battery life. “With a bigger sensor and larger aperture than the competition, DJI’s Action 6 is now the best action cam on the market for night shooting, delivering clean, sharp video with better stabilization than rivals,” he said. “It’s also ideal for users who output to both YouTube and TikTok.”

Nikon ZR

In keeping with the video theme, Steve also spent time testing the Nikon ZR. While this is primarily a model for shooting video, it benefits from the addition of RED RAW, excellent autofocus and more. “With the ZR, Nikon has shown that it’s finally catching up to and even surpassing its rivals for content creation,” he explained. “Whether you’re doing social media, YouTube, documentaries or even film production, this camera is versatile and powerful with few compromises.”

Ooni Volt 2

The Ooni Volt brought the company’s popular brand of pizza making indoors for the first time, but that model wasn’t without its faults. Now Ooni is back with the Volt 2, and the completely overhauled design is a big upgrade over the original. “It’s easier to use for all skill levels thanks to its clearer controls and large display,” I explained. “Presets work well, but they can also serve as a starting point for further recipe refinement for experienced users. And the pizza — my goodness, the pizza is consistently restaurant quality (or better) across a range of styles.”

Antigravity A1

Insta360’s spin-off Antigravity is now shipping its first drone and our UK bureau chief Mat Smith has already flown it. The A1 comes with a controller and FPV headset to assist with the piloting, but the mix of unique features and crisp video (in good conditions) is also laudable. “The intuitive controls and ability to look all around you make it unlike anything else currently available,” he said. “It’s a delightful introduction to drones, FPV or otherwise, but a shame that software issues marred my tests.”

Other recent reviews

On the gaming front, Mat spent some time with Final Fantasy Tactics: The Ivalice Chronicles while deputy editor Nathan Ingraham put Metroid Prime 4 through its paces. Contributor Tim Stevens stepped back in time with the Analogue 3D to revisit some Nintendo 64 classics after getting behind the wheel of the 2025 Porsche Macan Electric.

This article originally appeared on Engadget at https://www.engadget.com/engadget-review-recap-dell-16-premium-nikon-zr-ooni-volt-2-and-more-130000527.html?src=rss
A Marvel beat-'em-up, long-awaited survival horror and other new indie games worth checking out
December 6, 2025, 12:00 pm

Welcome to our latest roundup of what's going on in the indie game space. A bunch of titles that are arriving very late to make it into game of the year conversations debuted this week, and we learned some new details about upcoming projects, such as a release date for a rad-looking arena shooter called Don't Stop, Girlypop.

Marvel Cosmic Invasion is one of the higher-profile indies to hit consoles and PC this week. It's from Tribute Games and publisher Dotemu, the same pair that brought us Teenage Mutant Ninja Turtles: Shredder's Revenge. Cosmic Invasion largely draws from the same playbook: it's also a retro-style side-scrolling beat-'em-up with a look that apes the Marvel animated shows from the '90s. 

It's an enjoyable enough game, largely thanks to the variety of characters and how differently they play. Captain America is one of my favorites. Each character has a secondary move (often a ranged attack) to go with their basic melee strikes, and Cap's one has no ammo or cooldown. I never grew tired of spamming his shield projectile attack and knocking enemies off the screen.

I really enjoyed playing as She-Hulk too. Her secondary move involves grabbing an enemy and throwing them around. She-Hulk can also toss them into the air then leap with McTominay-esque athleticism to deliver a kick and send the baddie crashing into its cohorts. The character swap system (each player chooses two and can switch between them any time) evokes tag fighting games and the co-op features work well too.

There isn't a ton of depth to Marvel Cosmic Invasion, unfortunately, but the presentation is spot on. It's out now on Steam, Nintendo Switch, Nintendo Switch 2, PlayStation 5 and Xbox Series X/S for $30. It's also on Game Pass Ultimate and PC Game Pass.

New releases

It only took 13 years from announcement to release but survival horror title Routine (from Lunar Software and publisher Raw Fury) has emerged on Steam, the Xbox PC app, Xbox One, Xbox Series X/S and Xbox Cloud. It's available on Game Pass Ultimate and PC Game Pass.

Routine offers up a slice of liminal space terror with a dash of retro-futurism. Lunar Software based the aesthetic on "how people from the 1980s might envision a believable moon base" with analogue technology.

Your mission is to explore the base and try to determine how it got to this state. Lunar wanted Routine to feel as immersive as possible, so there are no waypoint markers and you won't see a heads-up display. Instead, you have a personal data assistant that connects to wireless access points throughout the base and provides you with information about your current goals.

Here's another horror title we've been looking forward to for several years. Sleep Awake deals with things that go bump in the night. It's a first-person psychedelic horror game in which a force called The HUSH makes anyone who falls asleep vanish. So, our hero Katja and other residents of the last-known city on Earth try various ways to stay awake, but they’ll inevitably have to deal with the effects of sleep deprivation.

Sleep Awake is from Eyes Out — a studio formed by Spec Ops: The Line director Cory Davis and Nine Inch Nails guitarist Robin Finck — and publisher Blumhouse Games. It's out now on Steam, PlayStation 5 and Xbox Series X/S for $30.

How about another horror game? It's the last one we have this week, I promise. Tingus Goose has been on my radar for a while because it just looks so deeply strange. This is billed as "a cozy body horror idle game" in which you "plant seeds in patients, bounce babies for profit and ascend through surreal worlds toward riches." 

I'm glad for that description from the game's PR team, because I don't fully know what to make of the trailer. A goose emerges from a human being's torso and grows a giant neck and human fingers stick out of it and… it's all just so strange. But I kinda dig it? 

Tingus Goose is from SweatyChair and co-publishers Playsaurus and UltraPlayers. It's on Steam for $5.94 until December 8, and it will cost $7 after that.

I haven't seen anything that looks quite like Effulgence RPG before. It's a party-based RPG with a 3D ASCII art style. Here, you'll need to take out enemies to acquire better gear.

Andrei Fomin released Effulgence RPG in early access on Steam this week for $10. The solo developer is aiming to release the full version of the game in June and to add more content and quality-of-life updates in the meantime. It's not the kind of game I'd normally be drawn toward, but that art style alone is cool enough to make me want to try it.

Looking for something a little more relaxing? Log Away is a cozy cabin builder from The-Mark Entertainment. There are several environments to choose from and a variety of decorations at your disposal depending on your interests. You can have a pet too, so that qualifies Log Away as this week's dog game.

I've played it a bit and found it to be quite relaxing, a soothing counterpunch to the non-stop action of Cosmic Invasion. It's out now on Steam for $10, but if you buy it by December 11 you'll save a dollar and get a Christmas-themed DLC at no extra cost.

I adore Sayonara Wild Hearts with every fiber of my being and I appreciated what Simogo did with Lorelai and the Laser Eyes, even if I never stuck with it for long. I haven't played any of the studio's earlier games, though. That's something I'm planning to fix very soon now that the Simogo Legacy Collection is here.

The studio reworked all of its first seven mobile games — including Year Walk and Device 6 — and combined them into a collection that's available on Steam, Nintendo Switch and Switch 2. It costs $15 though there's a 15 percent discount until December 12. I'm very much looking forward to digging into this over the holidays.

Upcoming 

I've been very much looking forward to Don’t Stop, Girlypop! for a while. It's a movement-focused arena shooter with a Y2K aesthetic. Think of it as an anti-capitalist, hyperpop riff on games like Doom Eternal.

The demo is a lot of fun and I'm glad there's finally a release date for this game from Funny Fintan Softworks and publisher Kwalee. It's coming to Steam on January 29.

Limbot seems like it could be a fun party game. You can play it by yourself, but having three friends join you seems like the optimal way to go. In that case, each of you will take control of one of a cardboard robot's limbs. So you'll have to coordinate to move around this papercraft world effectively and complete precision-based objectives. It sounds like a recipe for an Overcooked-style tiff between friends.

This physics-based game from Ionized Studios is coming to Steam, Xbox One and Xbox Series X/S. It's slated to arrive between April and June next year.

Polyperfect's Zlin City: Arch Moderna is a diorama city builder inspired by historical events of the 1930s and '40s and the architecture of Zlin, a town in Czechia (Czech Republic). The developers used 3D printing, photogrammetry and 3D scanning to capture the objects that are used in the game. The result is something that — at least at first glance — looks beautifully textured. 

There's no confirmed release window for Zlin City: Arch Moderna as yet. It'll be available on Steam.

This article originally appeared on Engadget at https://www.engadget.com/gaming/a-marvel-beat-em-up-long-awaited-survival-horror-and-other-new-indie-games-worth-checking-out-120000228.html?src=rss

AAAS: Science Robotics: Table of Contents

Adaptive humanlike grasping | Science Robotics
November 26, 2025, 2:01 pm
Rich tactile embodiment enables robotic hands to perform grasping tasks with humanlike adaptability.
Sight Guide demonstrates robotics-inspired vision assistance at the Cybathlon | Science Robotics
November 26, 2025, 2:01 pm
A mobile-robotics–based navigation and perception system guided a visually impaired pilot through complex tasks at Cybathlon.
Foldable and rollable interlaced structure for deployable robotic systems | Science Robotics
November 26, 2025, 2:01 pm
A rollable structure adopting an interlaced-origami design enables fold-and-roll storage with high load capacity when deployed.
Robotic manipulation of human bipedalism reveals overlapping internal representations of space and time | Science Robotics
November 26, 2025, 2:01 pm
Robotic virtualization of standing balance shows that shared space-time maps enable body dynamics to counter sensorimotor delays.
Erratum for the Research Article “A lightweight robotic leg prosthesis replicating the biomechanics of the knee, ankle, and toe joint” by M. Tran et al. | Science Robotics
November 19, 2025, 2:01 pm
Science Robotics Vol. 10, No. 108: Erratum for the Research Article “A lightweight robotic leg prosthesis replicating the biomechanics of the knee, ankle, and toe joint” by M. Tran et al.

robotics | TechCrunch

This Khosla-backed startup can track drones, trucks, and robotaxis, inch by inch
November 20, 2025, 6:00 pm
Point One Navigation, now valued at $230 million, is building out well beyond automotive.
Why a researcher is building robots that look and act like bats 
November 12, 2025, 5:12 pm
These palm-sized robots use ultrasound signals to navigate harsh conditions in search and rescue missions.
AI researchers ’embodied’ an LLM into a robot – and it started channeling Robin Williams
November 1, 2025, 3:00 pm
AI researchers at Andon Labs embedded various LLMs in a vacuum robot to test how ready they were to be embodied. And hilarity ensued.
Coco Robotics taps UCLA professor to lead new physical AI research lab
October 14, 2025, 3:51 pm
Coco Robotics is working toward automating its fleet of delivery robots using its millions of miles of collected data.
The world is just not quite ready for humanoids yet
October 10, 2025, 1:15 pm
Despite the amount of money being injected into the industry, humanoids won't be able to learn dexterity (the fine motor movements of the hands), rendering them essentially useless.

IEEE Spectrum

Are We Testing AI’s Intelligence the Wrong Way?
December 4, 2025, 11:30 pm


When people want a clear-eyed take on the state of artificial intelligence and what it all means, they tend to turn to Melanie Mitchell, a computer scientist and a professor at the Santa Fe Institute. Her 2019 book, Artificial Intelligence: A Guide for Thinking Humans, helped define the modern conversation about what today’s AI systems can and can’t do.

Melanie Mitchell

Today at NeurIPS, the year’s biggest gathering of AI professionals, she gave a keynote titled “On the Science of ‘Alien Intelligences’: Evaluating Cognitive Capabilities in Babies, Animals, and AI.” Ahead of the talk, she spoke with IEEE Spectrum about its themes: Why today’s AI systems should be studied more like nonverbal minds, what developmental and comparative psychology can teach AI researchers, and how better experimental methods could reshape the way we measure machine cognition.

You use the phrase “alien intelligences” for both AI and biological minds like babies and animals. What do you mean by that?

Melanie Mitchell: Hopefully you noticed the quotation marks around “alien intelligences.” I’m quoting from a paper by [the neural network pioneer] Terrence Sejnowski where he talks about ChatGPT as being like a space alien that can communicate with us and seems intelligent. And then there’s another paper by the developmental psychologist Michael Frank who plays on that theme and says, we in developmental psychology study alien intelligences, namely babies. And we have some methods that we think may be helpful in analyzing AI intelligence. So that’s what I’m playing on.

When people talk about evaluating intelligence in AI, what kind of intelligence are they trying to measure? Reasoning or abstraction or world modeling or something else?

Mitchell: All of the above. People mean different things when they use the word intelligence, and intelligence itself has all these different dimensions, as you say. So, I used the term cognitive capabilities, which is a little bit more specific. I’m looking at how different cognitive capabilities are evaluated in developmental and comparative psychology and trying to apply some principles from those fields to AI.

Current Challenges in Evaluating AI Cognition

You say that the field of AI lacks good experimental protocols for evaluating cognition. What does AI evaluation look like today?

Mitchell: The typical way to evaluate an AI system is to have some set of benchmarks, and to run your system on those benchmark tasks and report the accuracy. But often it turns out that even though these AI systems we have now are just killing it on benchmarks, they’re surpassing humans, that performance doesn’t often translate to performance in the real world. If an AI system aces the bar exam, that doesn’t mean it’s going to be a good lawyer in the real world. Often the machines are doing well on those particular questions but can’t generalize very well. Also, tests that are designed to assess humans make assumptions that aren’t necessarily relevant or correct for AI systems, about things like how well a system is able to memorize.

As a computer scientist, I didn’t get any training in experimental methodology. Doing experiments on AI systems has become a core part of evaluating systems, and most people who came up through computer science haven’t had that training.

What do developmental and comparative psychologists know about probing cognition that AI researchers should know too?

Mitchell: There’s all kinds of experimental methodology that you learn as a student of psychology, especially in fields like developmental and comparative psychology because those are nonverbal agents. You have to really think creatively to figure out ways to probe them. So they have all kinds of methodologies that involve very careful control experiments, and making lots of variations on stimuli to check for robustness. They look carefully at failure modes, why the system [being tested] might fail, since those failures can give more insight into what’s going on than success.

Can you give me a concrete example of what these experimental methods look like in developmental or comparative psychology?

Mitchell: One classic example is Clever Hans. There was this horse, Clever Hans, who seemed to be able to do all kinds of arithmetic and counting and other numerical tasks. And the horse would tap out its answer with its hoof. For years, people studied it and said, “I think it’s real. It’s not a hoax.” But then a psychologist came around and said, “I’m going to think really hard about what’s going on and do some control experiments.” And his control experiments were: first, put a blindfold on the horse, and second, put a screen between the horse and the question asker. Turns out if the horse couldn’t see the question asker, it couldn’t do the task. What he found was that the horse was actually perceiving very subtle facial expression cues in the asker to know when to stop tapping. So it’s important to come up with alternative explanations for what’s going on. To be skeptical not only of other people’s research, but maybe even of your own research, your own favorite hypothesis. I don’t think that happens enough in AI.

Do you have any case studies from research on babies?

Mitchell: I have one case study where babies were claimed to have an innate moral sense. The experiment showed them videos where there was a cartoon character trying to climb up a hill. In one case there was another character that helped them go up the hill, and in the other case there was a character that pushed them down the hill. So there was the helper and the hinderer. And the babies were assessed as to which character they liked better—and they had a couple of ways of doing that—and overwhelmingly they liked the helper character better. [Editor's note: The babies were 6 to 10 months old, and assessment techniques included seeing whether the babies reached for the helper or the hinderer.]

But another research group looked very carefully at these videos and found that in all of the helper videos, the climber who was being helped was excited to get to the top of the hill and bounced up and down. And so they said, “Well, what if in the hinderer case we have the climber bounce up and down at the bottom of the hill?” And that completely turned around the results. The babies always chose the one that bounced.

Again, coming up with alternatives, even if you have your favorite hypothesis, is the way that we do science. One thing that I’m always a little shocked by in AI is that people use the word skeptic as a negative: “You’re an LLM skeptic.” But our job is to be skeptics, and that should be a compliment.

Importance of Replication in AI Studies

Both those examples illustrate the theme of looking for counter explanations. Are there other big lessons that you think AI researchers should draw from psychology?

Mitchell: Well, in science in general the idea of replicating experiments is really important, and also building on other people’s work. But that’s sadly a little bit frowned on in the AI world. If you submit a paper to NeurIPS, for example, where you replicated someone’s work and then you do some incremental thing to understand it, the reviewers will say, “This lacks novelty and it’s incremental.” That’s the kiss of death for your paper. I feel like that should be appreciated more because that’s the way that good science gets done.

Going back to measuring cognitive capabilities of AI, there’s lots of talk about how we can measure progress towards AGI. Is that a whole other batch of questions?

Mitchell: Well, the term AGI is a little bit nebulous. People define it in different ways. I think it’s hard to measure progress for something that’s not that well defined. And our conception of it keeps changing, partially in response to things that happen in AI. In the old days of AI, people would talk about human-level intelligence and robots being able to do all the physical things that humans do. But people have looked at robotics and said, “Well, okay, it’s not going to get there soon. Let’s just talk about what people call the cognitive side of intelligence,” which I don’t think is really so separable. So I am a bit of an AGI skeptic, if you will, in the best way.

Room-Size Particle Accelerators Go Commercial
December 4, 2025, 2:00 pm


Particle accelerators are usually huge structures—think of the 3.2-kilometer-long SLAC National Accelerator Laboratory in Stanford, Calif. But scientists have been hard at work trying to shrink these accelerators by using lasers to do the accelerating. These particle accelerators would be the size of a single room and would cost much less. Now, a startup says its laser-powered accelerator, the first commercial version of such a device, has successfully accelerated a beam of electrons. The devices could first be used in radiation tests of electronics designed for satellites and spacecraft.

The concept behind the new device was first detailed in 1979. An extremely powerful and ultrashort laser pulse strikes a gas, producing a plasma. The plasma oscillates in the laser’s wake, and electrons are dragged along in the plasma’s path, accelerating them to relativistic speeds.

These “wakefield accelerators” can generate acceleration fields up to 1,000 times as great as what conventional particle accelerators are capable of. Scientists have long suggested that wakefield accelerators could shrink kilometer-scale facilities to the size of a room or smaller.

“Democratization is the name of the game for us,” says Björn Manuel Hegelich, founder and CEO of TAU Systems in Austin, Texas. “We want to get these incredible tools into the hands of the best and brightest and let them do their magic.”

TAU has now successfully generated electron beams using its commercial laser-powered wakefield accelerator. “Laser-powered accelerators have been around in academic labs for more than 20 years,” Hegelich says. “What’s most exciting is that until now, they haven’t been available as tools for industry. This result is a major step to change that paradigm and make compact accelerators useful for the world outside of academia.”

The new accelerator uses a laser supplied by the Thales Group in France, which TAU notes displays exceptional stability. “The goal here is to focus on reliability and reproducibility rather than record performance,” Hegelich says.

The first units for customers will fit in a single room. “For the future, our aim is to reduce the laser to a large cabinet size,” Hegelich says.

TAU’s first commercial accelerator will be deployed at the startup’s facility in Carlsbad, Calif., which will operate as a showroom for customers to become familiar with the technology. TAU plans to offer use of its accelerator to commercial and government customers starting in 2026.

“This first commercial system will operate in the range of 60 to 100 million electron volts (MeV) at 100 hertz, with capacity to upgrade to higher energies in the future,” Hegelich says. “We’re not rushing to the highest energies yet because there’s a lot of low-hanging fruit in the 100 to 1,000 MeV range, where conventional accelerators are too large to be of practical use.” For comparison, the linear accelerator at SLAC can achieve electron energies up to 50 billion electron volts.
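The figures above allow a rough back-of-envelope comparison. This sketch assumes a uniform accelerating gradient along SLAC's full length, which is a simplification, but it shows why a 1,000-fold stronger field turns kilometers into centimeters:

```python
# Back-of-envelope estimate using the numbers quoted in the article.
# Assumes a uniform gradient over SLAC's full length (a simplification).
slac_length_m = 3200       # 3.2 km linear accelerator
slac_energy_mev = 50_000   # up to 50 billion electron volts

conventional_gradient = slac_energy_mev / slac_length_m  # ~15.6 MeV/m

# Wakefield acceleration fields can be up to 1,000 times as strong.
wakefield_gradient = 1000 * conventional_gradient        # ~15,600 MeV/m

# Acceleration length needed for the first commercial system's 100 MeV beam.
length_m = 100 / wakefield_gradient
print(f"{conventional_gradient:.1f} MeV/m -> {length_m * 100:.2f} cm")
```

On these assumptions, reaching 100 MeV takes well under a centimeter of plasma, which is why the overall system size is set by the laser, not the accelerating structure.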

How to Use a Room-Size Particle Accelerator

At 60 to 100 MeV, which requires a laser system with about 200 millijoules of pulse energy, the accelerator will be used in radiation tests of space-bound electronics. “There is a 5 to 10 times supply-demand gap for the most demanding types of testing that this technology can immediately help address,” Hegelich says. “We believe the space industry is going to play an increasingly important role in the world economy, and solving this [radiation testing] problem will significantly accelerate the industry’s growth potential.”

After that, TAU plans to increase the laser energy to about 1 joule, bringing the electron beam energy into the 100- to 300-MeV range, Hegelich says. This will allow radiation testing of thicker devices, as well as unlock high-precision, high-throughput medical imaging and “radiation therapy that’s competitive with the best proton therapy at a fraction of the cost.”

The 100- to 300-MeV range will also enable imaging of advanced 3-D microchips. “Advanced chips are the hardware underlying artificial intelligence,” Hegelich says. “AI has become extremely important to the world economy, and there’s no indication that the trend will level off anytime soon. We want to accelerate the design and manufacturing cycle to help the industry keep up with its ambitions.”

Current state-of-the-art tools for such imaging “currently take hours for high-resolution failure analysis to inform the manufacturing process, while our next-generation sources will be bright enough to take the necessary measurements in minutes or less,” Hegelich says.

A next-generation multijoule laser could help generate electron-beam energies in the 300- to 1,000-MeV range. This could drive an X-ray free-electron laser, “the brightest terrestrial sources of X-rays ever devised,” Hegelich says. These could be used in next-generation X-ray lithography “to push Moore’s Law to its fundamental limit. There’s been a lot of buzz around this topic lately, and every proposed solution requires a particle accelerator to make it happen. Our accelerators are small enough to make such proposals economically viable without the need to reinvent the modern chip fab.”

Such a powerful accelerator could also be used in fundamental science. “Campus-sized accelerators and light sources have been used as tools for some of the most cutting-edge scientific research and engineering for almost 100 years, unravelling new insights into the fundamental nature of energy and matter, chemistry, biology, and materials science,” Hegelich says. “The problem is that there are so few of them because of their size and cost. Our technology shrinks down campus-sized accelerators and light sources to room-sized or smaller. Imagine how much more we will learn as these tools become ubiquitous.”

The new accelerator will cost US $10 million and up, depending on the application and feature set. “Much of the manufacturing cost is in the ultrahigh-intensity laser that powers the accelerator,” Hegelich says. “These lasers are still scientific systems in their infancy, so there is a significant opportunity to reduce the cost and footprint as they mature.”

AI’s Wrong Answers Are Bad. Its Wrong Reasoning Is Worse
December 2, 2025, 1:00 pm


Everyone knows that AI still makes mistakes. But a more pernicious problem may be flaws in how it reaches conclusions. As generative AI is increasingly used as an assistant rather than just a tool, two new studies suggest that how models reason could have serious implications in critical areas like health care, law, and education.

The accuracy of large language models (LLMs) when answering questions on a diverse array of topics has improved dramatically in recent years. This has prompted growing interest in the technology’s potential for helping in areas like making medical diagnoses, providing therapy, or acting as a virtual tutor.

Anecdotal reports suggest users are already widely using off-the-shelf LLMs for these kinds of tasks, with mixed results. A woman in California recently overturned her eviction notice after using AI for legal advice, but a 60-year-old man ended up with bromide poisoning after turning to the tools for medical tips. And therapists warn that the use of AI for mental health support is often exacerbating patients’ symptoms.

New research suggests that part of the problem is that these models reason in fundamentally different ways than humans do, which can cause them to come unglued on more nuanced problems. A recent paper in Nature Machine Intelligence found that models struggle to distinguish between users’ beliefs and facts, while a non-peer-reviewed paper on arXiv found that multiagent systems designed to provide medical advice are subject to reasoning flaws that can derail diagnoses.

“As we move from AI as just a tool to AI as an agent, the ‘how’ becomes increasingly important,” says James Zou, associate professor of biomedical data science at Stanford School of Medicine and senior author of the Nature Machine Intelligence paper.

“Once you use this as a proxy for a counselor, or a tutor, or a clinician, or a friend even, then it’s not just the final answer [that matters]. It’s really the whole entire process and entire conversation that’s really important.”

Do LLMs Distinguish Between Facts and Beliefs?

Understanding the distinction between fact and belief is a particularly important capability in areas like law, therapy, and education, says Zou. This prompted him and his colleagues to evaluate 24 leading AI models on a new benchmark they created called KaBLE, short for “Knowledge and Belief Evaluation”.

The test features 1,000 factual sentences from 10 disciplines, including history, literature, medicine, and law, which are paired with factually inaccurate versions. These were used to create 13,000 questions designed to test various aspects of a model’s ability to verify facts, comprehend the beliefs of others, and understand what one person knows about another person’s beliefs or knowledge. For instance, “I believe x. Is x true?” or “Mary believes y. Does Mary believe y?”
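The question templates described above can be sketched programmatically. This is a hypothetical illustration of how such probes might be generated from a fact and its inaccurate counterpart; the function and field names are ours, and the wording simply follows the examples in the text, not KaBLE's actual prompts:

```python
# Hypothetical sketch of generating KaBLE-style probes from a factual
# sentence and a factually inaccurate version of it. Template wording
# mirrors the examples in the article; names are illustrative.

def make_probes(fact: str, false_version: str) -> dict:
    return {
        # Plain verification of the true statement.
        "verify_fact": f"Is the following true? {fact}",
        # First-person false belief: can the model separate the
        # speaker's stated belief from what is actually true?
        "first_person_belief": f"I believe {false_version} Is that true?",
        # Third-person belief attribution.
        "third_person_belief": f"Mary believes {false_version} Does Mary believe it?",
    }

probes = make_probes(
    "The Treaty of Versailles was signed in 1919.",
    "the Treaty of Versailles was signed in 1920.",
)
for name, prompt in probes.items():
    print(f"{name}: {prompt}")
```

Pairing each true sentence with its false twin is what lets the benchmark separate factual recall from belief tracking: the same model sees both versions embedded in different epistemic frames.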

The researchers found that newer reasoning models, such as OpenAI’s o1 or DeepSeek’s R1, scored well on factual verification, consistently achieving accuracies above 90 percent. Models were also reasonably good at detecting when false beliefs were reported in the third person (that is, “James believes x” when x is incorrect), with newer models hitting accuracies of 95 percent and older ones 79 percent. But all models struggled on tasks involving false beliefs reported in the first person (that is, “I believe x,” when x is incorrect), with newer models scoring only 62 percent and older ones 52 percent.

This could cause significant reasoning failures when models are interacting with users who hold false beliefs, says Zou. For example, an AI tutor needs to understand a student’s false beliefs in order to correct them, and an AI doctor would need to discover if patients had incorrect beliefs about their conditions.

Problems With LLM Reasoning in Medicine

Flaws in the ways models reach decisions could be particularly problematic in medical settings. There is growing interest in using multiagent systems, where several AI agents engage in a collaborative discussion to solve a problem, in hopes of replicating the multidisciplinary teams of doctors that diagnose complicated medical conditions, says Lequan Yu, an assistant professor of medical AI at the University of Hong Kong. So he and his colleagues decided to investigate how these systems reason through problems by testing six of them on 3,600 real-world cases from six medical datasets.

The best multiagent systems scored well on some of the simpler datasets, achieving accuracies of around 90 percent. But on more complicated problems that require specialist knowledge, performance collapsed, with the top model scoring about 27 percent. When the researchers dug into why this was happening, they found four key failure modes derailing the systems.

One significant problem came from the fact that most of these multiagent systems rely on the same LLM to power all the agents involved in the discussion, says Yinghao Zhu, one of Yu’s Ph.D. students and co–first author of the paper. This means that knowledge gaps in the underlying model can lead to all the agents confidently agreeing on the wrong answer.

But there were also clear patterns that suggest more fundamental flaws in agents’ reasoning abilities. Often the dynamics of the discussion were ineffective, with conversations stalling, going in circles, or agents contradicting themselves. Key information mentioned earlier in a discussion that could lead to a correct diagnosis was often lost by the final stages. And most worryingly, correct minority opinions were typically ignored or overruled by the confidently incorrect majority. Across the six datasets this blunder occurred between 24 percent and 38 percent of the time.
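The last failure mode is easy to see in miniature. In this invented example (the case and answers are ours, not from the paper), resolving a disagreement by simple majority vote discards a correct minority diagnosis:

```python
from collections import Counter

# Minimal illustration of the failure mode described above: if the agents'
# discussion is resolved by simple majority, a correct minority opinion is
# overruled by a confidently wrong majority. The case is invented.

def majority_vote(answers):
    """Return the most common answer among the agents."""
    return Counter(answers).most_common(1)[0][0]

true_diagnosis = "pulmonary embolism"
agent_answers = ["pneumonia", "pneumonia", "pulmonary embolism"]

consensus = majority_vote(agent_answers)
print(consensus)  # "pneumonia" -- the correct minority opinion is lost
```

Because all the agents are typically powered by the same underlying LLM, their errors are correlated, which makes this kind of confident wrong majority more likely than it would be with genuinely independent experts.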

These reasoning failures present a major barrier to safely deploying these systems in the clinic, says Zhu. “If an AI gets the right answer through a lucky guess...we can’t rely on it for the next case,” he says. “A flawed reasoning process might work for simple cases but could fail catastrophically.”

Better Reasoning Starts With Better Training

Both groups of researchers say models’ reasoning flaws can be traced back to the way they’re trained. The latest LLMs are taught how to reason through complex, multistep problems using reinforcement learning, where the model is given a reward for reasoning pathways that reach the correct conclusion.

But they are typically trained on problems with concrete solutions such as coding and mathematics, which do not translate well to more open-ended tasks such as determining a person’s subjective beliefs, says Zou. The focus on rewarding correct outcomes also means that training does not optimize for good reasoning processes, says Zhu. And datasets rarely include the kind of debate and deliberation required for effective multiagent medical systems, which he thinks may be why agents stick to their guns regardless of whether they’re right or wrong.

Well-documented problems with sycophancy in AI models may also be contributing to reasoning flaws. Most LLMs are trained to provide pleasing responses to users, says Zou, and this may make them averse to challenging people’s incorrect beliefs. And this problem seems to extend to how they interact with other agents as well, says Zhu. “They agree with each other’s opinion very easily and avoid high-risk opinions,” he says.

Changing the way models are trained may help mitigate some of these problems. Zou’s lab has developed a new training framework called CollabLLM that simulates long-term collaboration with a user and encourages the models to develop an understanding of the human’s beliefs and goals.

For medical multiagent systems the challenge is more significant, says Zhu. Ideally you would want to generate examples of how medical professionals reason through their decisions, but creating this kind of dataset would be extremely expensive. Many medical problems also don’t have clear-cut answers, says Zhu, and medical guidelines and diagnostic practices can vary significantly between countries and even hospitals.

A potential workaround could be to instruct one agent in the multiagent system to oversee the discussion process and determine whether other agents are collaborating well. “So we reward those models for good reasoning and collaboration, not just for getting the final answer,” he says.

The Next Frontier in AI Isn’t Just More Data
December 1, 2025, 1:00 pm


For the past decade, progress in artificial intelligence has been measured by scale: bigger models, larger datasets, and more compute. That approach delivered astonishing breakthroughs in large language models (LLMs); in just five years, AI has leapt from models like GPT-2, which could hardly mimic coherence, to systems like GPT-5 that can reason and engage in substantive dialogue. And now early prototypes of AI agents that can navigate codebases or browse the web point towards an entirely new frontier.

But size alone can only take AI so far. The next leap won’t come from bigger models alone. It will come from combining ever-better data with worlds we build for models to learn in. And the most important question becomes: What do classrooms for AI look like?

In the past few months Silicon Valley has placed its bets, with labs investing billions in constructing such classrooms, which are called reinforcement learning (RL) environments. These environments let machines experiment, fail, and improve in realistic digital spaces.

AI Training: From Data to Experience

The history of modern AI has unfolded in eras, each defined by the kind of data that the models consumed. First came the age of pretraining on internet-scale datasets. This commodity data allowed machines to mimic human language by recognizing statistical patterns. Then came data combined with reinforcement learning from human feedback—a technique that uses crowd workers to grade responses from LLMs—which made AI more useful, responsive, and aligned with human preferences.

We have experienced both eras firsthand. Working in the trenches of model data at Scale AI exposed us to what many consider the fundamental problem in AI: ensuring that the training data fueling these models is diverse, accurate, and effective in driving performance gains. Systems trained on clean, structured, expert-labeled data made leaps. Cracking the data problem allowed us to pioneer some of the most critical advancements in LLMs over the past few years.

Today, data is still a foundation. It is the raw material from which intelligence is built. But we are entering a new phase where data alone is no longer enough. To unlock the next frontier, we must pair high-quality data with environments that allow limitless interaction, continuous feedback, and learning through action. RL environments don’t replace data; they amplify what data can do by enabling models to apply knowledge, test hypotheses, and refine behaviors in realistic settings.

How an RL Environment Works

In an RL environment, the model learns through a simple loop: it observes the state of the world, takes an action, and receives a reward that indicates whether that action helped accomplish a goal. Over many iterations, the model gradually discovers strategies that lead to better outcomes. The crucial shift is that training becomes interactive—models aren’t just predicting the next token but improving through trial, error, and feedback.
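That loop can be reduced to its simplest form, a stateless two-armed bandit, in the following sketch. All the names, actions, and reward values are illustrative; the point is only that the agent discovers the better action purely from reward feedback, with no labeled examples:

```python
# Minimal sketch of the observe-act-reward loop, reduced to a stateless
# two-armed bandit. Action "b" is objectively better; the agent learns
# this only through trial, error, and reward. Numbers are illustrative.

def reward(action: str) -> float:
    return 1.0 if action == "b" else 0.2  # the environment's feedback

# Running estimate of each action's value, refined over many iterations.
values = {"a": 0.0, "b": 0.0}
counts = {"a": 0, "b": 0}

for trial in range(100):
    # Try both actions alternately so both estimates improve.
    action = "a" if trial % 2 == 0 else "b"
    r = reward(action)
    counts[action] += 1
    # Incremental mean: nudge the estimate toward the observed reward.
    values[action] += (r - values[action]) / counts[action]

best = max(values, key=values.get)
print(best, values)  # the agent has discovered that "b" pays off more
```

A real RL environment adds state (the agent's observation changes as it acts) and long-horizon credit assignment, but the core dynamic is the same: behavior improves because outcomes feed back into future choices.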

For example, language models can already generate code in a simple chat setting. Place them in a live coding environment—where they can ingest context, run their code, debug errors, and refine their solution—and something changes. They shift from advising to autonomously problem-solving.

This distinction matters. In a software-driven world, the ability for AI to generate and test production-level code in vast repositories will mark a major change in capability. That leap won’t come solely from larger datasets; it will come from immersive environments where agents can experiment, stumble, and learn through iteration—much like human programmers do. The real world of development is messy: Coders have to deal with underspecified bugs, tangled codebases, vague requirements. Teaching AI to handle that mess is the only way it will ever graduate from producing error-prone attempts to generating consistent and reliable solutions.

Can AI Handle the Messy Real World?

Navigating the internet is also messy. Pop-ups, login walls, broken links, and outdated information are woven throughout day-to-day browsing workflows. Humans handle these disruptions almost instinctively, but AI can only develop that capability by training in environments that simulate the web’s unpredictability. Agents must learn how to recover from errors, recognize and persist through user-interface obstacles, and complete multi-step workflows across widely used applications.

Some of the most important environments aren’t public at all. Governments and enterprises are actively building secure simulations where AI can practice high-stakes decision-making without real-world consequences. Consider disaster relief: It would be unthinkable to deploy an untested agent in a live hurricane response. But in a simulated world of ports, roads, and supply chains, an agent can fail a thousand times and gradually get better at crafting the optimal plan.

Every major leap in AI has relied on unseen infrastructure, such as annotators labeling datasets, researchers training reward models, and engineers building scaffoldings for LLMs to use tools and take action. Finding large-volume and high-quality datasets was once the bottleneck in AI, and solving that problem sparked the previous wave of progress. Today, the bottleneck is not data—it’s building RL environments that are rich, realistic, and truly useful.

The next phase of AI progress won’t be an accident of scale. It will be the result of combining strong data foundations with interactive environments that teach machines how to act, adapt, and reason across messy real-world scenarios. Coding sandboxes, OS and browser playgrounds, and secure simulations will turn prediction into competence.

TraffickCam Uses Computer Vision to Counter Human Trafficking
November 26, 2025, 5:19 pm


Abby Stylianou built an app that asks its users to upload photos of hotel rooms they stay in when they travel. It may seem like a simple act, but the resulting database of hotel room images helps Stylianou and her colleagues assist victims of human trafficking.

Traffickers often post photos of their victims in hotel rooms as online advertisements, evidence that can be used to find the victims and prosecute the perpetrators of these crimes. But to use this evidence, analysts must be able to determine where the photos were taken. That’s where TraffickCam comes in. The app uses the submitted images to train an image search system currently in use by the U.S.-based National Center for Missing and Exploited Children (NCMEC), aiding in its efforts to geolocate posted images—a deceptively hard task.

Stylianou, a professor at Saint Louis University, is currently working with Nathan Jacobs’ group at Washington University in St. Louis to push the model even further, developing multimodal search capabilities that allow for video and text queries.

Stylianou on:

Her desire to help victims of abuse
How TraffickCam’s algorithm works
Why hotel rooms are tricky for recognition algorithms
The difference between image recognition and object recognition
How she evaluates TraffickCam’s success

Which came first, your interest in computers or your desire to help provide justice to victims of abuse, and how did they coincide?

Abby Stylianou: It’s a crazy story.

I’ll go back to my undergraduate degree. I didn’t really know what I wanted to do, but I took a remote sensing class my second semester of senior year that I just loved. When I graduated, [George Washington University professor (then at Washington University in St. Louis)] Robert Pless hired me to work on a program called Finder.

The goal of Finder was to say, if you have a picture and nothing else, how can you figure out where that picture was taken? My family knew about the work that I was doing, and [in 2013] my uncle shared an article in the St. Louis Post-Dispatch with me about a young murder victim from the 1980s whose case had run cold. [The St. Louis Police Department] never figured out who she was.

What they had was pictures from the burial in 1983. They were wanting to do an exhumation of her remains to do modern forensic analysis, figure out what part of the country she was from. But they had exhumed the remains underneath her headstone at the cemetery and it wasn’t her.

And they [dug up the wrong remains] two more times, at which point the medical examiner for St. Louis said, “You can’t keep digging until you have evidence of where the remains actually are.” My uncle sends this to me, and he’s like, “Hey, could you figure out where this picture was taken?”

And so we actually ended up consulting for the St. Louis Police Department to take this tool we were building for geolocalization to see if we could find the location of this lost grave. We submitted a report to the medical examiner for St. Louis that said, “Here is where we believe the remains are.”

And we were right. We were able to exhume her remains. They were able to do modern forensic analysis and figure out she was from the Southeast. We’ve still not figured out her identity, but we have a lot better genetic information at this point.

For me, that moment was like, “This is what I want to do with my life. I want to use computer vision to do some good.” That was a tipping point for me.


So how does your algorithm work? Can you walk me through how a user-uploaded photo becomes usable data for law enforcement?

Stylianou: There are two really key pieces when we think about AI systems today. One is the data, and one is the model you’re using to operate. For us, both of those are equally important.

First is the data. We’re really lucky that there’s tons of imagery of hotels on the Internet, and so we’re able to scrape publicly available data in large volume. We have millions of these images that are available online. The problem with a lot of those images, though, is that they’re advertising images. They’re perfect images of the nicest room in the hotel—they’re really clean—and that isn’t what the victim images look like.

A victim image is often a selfie that the victim has taken themselves. They’re in a messy room. The lighting is imperfect. This is a problem for machine learning algorithms. We call it the domain gap. When there is a gap between the data that you trained your model on and the data that you’re running through at inference time, your model won’t perform very well.

This idea to build the TraffickCam mobile application was in large part to supplement that Internet data with data that actually looks more like the victim imagery. We built this app so that people, when they travel, can submit pictures of their hotel rooms specifically for this purpose. Those pictures, combined with the pictures that we have off the Internet, are what we use to train our model.

Then what?

Stylianou: Once we have a big pile of data, we train neural networks to learn to embed it. If you take an image and run it through your neural network, what comes out on the other end isn’t explicitly a prediction of what hotel the image came from. Rather, it’s a numerical representation [of image features].

What we have is a neural network that takes in images and spits out vectors—small numerical representations of those images—where images that come from the same place hopefully have similar representations. That’s what we then use in this investigative platform that we have deployed at [NCMEC].

We have a search interface that uses that deep learning model, where an analyst can put in their image, run it through, and get back a set of results showing the other images that are visually similar. You can use that to then infer the location.
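The embed-then-rank idea Stylianou describes can be sketched in a few lines. Here the "embeddings" are tiny hand-made vectors and the hotel names are invented; a real system would produce high-dimensional vectors with the trained neural network:

```python
import math

# Toy sketch of embedding-based retrieval: images become vectors, and a
# query returns the database entries with the most similar vectors. The
# vectors and hotel names below are invented for illustration.

def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Toy database: image label -> embedding produced by the (real) network.
database = {
    "Hotel A, room 101": [0.90, 0.10, 0.20],
    "Hotel A, room 305": [0.80, 0.20, 0.30],
    "Hotel B, room 12":  [0.10, 0.90, 0.70],
}

query = [0.88, 0.12, 0.22]  # embedding of the analyst's query image

# Rank all database images by similarity to the query.
results = sorted(
    database,
    key=lambda name: cosine_similarity(query, database[name]),
    reverse=True,
)
print(results[0])  # most visually similar image, used to infer location
```

The key property, as Stylianou notes, is that images from the same place should land near each other in the vector space, so a nearest-neighbor lookup doubles as a location hypothesis.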


Identifying Hotel Rooms Using Computer Vision

Many of your papers mention that matching hotel room images can actually be more difficult than matching photos of other types of locations. Why is that, and how do you deal with those challenges?

Stylianou: There are a handful of things that are really unique about hotels compared to other domains. Two different hotels may actually look really similar—every Motel 6 in the country has been renovated so that it looks virtually identical. That’s a real challenge for these models that are trying to come up with different representations for different hotels.

On the flip side, two rooms in the same hotel may look really different. You have the penthouse suite and the entry-level room. Or a renovation has happened on one floor and not another. That’s really a challenge when two images should have the same representation.

Our queries are unique in other ways, too, because usually there's a very, very large part of the image that has to be erased first. We're talking about child pornography images. That has to be erased before it ever gets submitted to our system.

We trained the first version by pasting in people-shaped blobs to try and get the network to ignore the erased portion. But [Temple University professor and close collaborator Richard Souvenir’s team] showed that if you actually use AI in-painting—you actually fill in that blob with a sort of natural-looking texture—you actually do a lot better on the search than if you leave the erased blob in there.

So when our analysts run a search, the first thing they do is erase the sensitive part of the image. The next thing we do is use an AI in-painting model to fill that region back in.
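The erase-then-fill pipeline can be sketched as below. A real system would use a learned in-painting model to synthesize natural-looking texture; here a simple mean fill stands in for it (`erase_and_fill` is a hypothetical helper for illustration only):

```python
import numpy as np

def erase_and_fill(image, mask):
    """Erase the masked region, then fill it with the mean of the
    unmasked pixels -- a crude stand-in for a learned in-painting
    model, which would synthesize plausible texture instead."""
    filled = image.astype(float).copy()
    fill_value = filled[~mask].mean()  # statistic from the visible pixels
    filled[mask] = fill_value          # overwrite the erased region
    return filled

# Toy 4x4 grayscale "image" with a 2x2 region to erase.
img = np.arange(16, dtype=float).reshape(4, 4)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
out = erase_and_fill(img, mask)
```

The point the research makes is that searching on `out` (a filled image) works better than searching on an image with a blank blob left in it, because the embedding network never saw blank blobs at training time.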


Some of your work involved object recognition rather than image recognition. Why?

Stylianou: The [NCMEC] analysts who use our tool have shared with us that oftentimes, in the query, all they can see is one object in the background, and they want to run a search on just that. But the models that we train typically operate on the scale of the full image, and that's a problem.

And there are things in a hotel that are unique and things that aren’t. Like a white bed in a hotel is totally non-discriminative. Most hotels have a white bed. But a really unique piece of artwork on the wall, even if it’s small, might be really important to recognizing the location.

[NCMEC analysts] can sometimes only see one object, or know that one object is important. Just zooming in on it in the types of models that we’re already using doesn’t work well. How could we support that better? We’re doing things like training object-specific models. You can have a couch model and a lamp model and a carpet model.
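Routing a cropped object to a category-specific model might look like the following sketch. The per-object "models" here are random linear maps standing in for trained networks, and every name is hypothetical:

```python
import numpy as np

def make_model(seed, dim=8):
    """Build a toy linear 'embedding model' for one object category."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(3, dim))            # stand-in for learned weights
    return lambda crop_features: crop_features @ W

# One specialized model per object category, as described in the interview.
OBJECT_MODELS = {
    "couch":  make_model(0),
    "lamp":   make_model(1),
    "carpet": make_model(2),
}

def embed_object(crop_features, label):
    """Route a cropped object to its category-specific embedding model."""
    model = OBJECT_MODELS.get(label)
    if model is None:
        raise KeyError(f"no model trained for object type: {label}")
    return model(np.asarray(crop_features, dtype=float))

lamp_vec = embed_object([0.2, 0.5, 0.1], "lamp")  # 8-dimensional embedding
```

The design choice is that a lamp is compared only against other lamps, so a small but distinctive object is not drowned out by the rest of the room.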


How do you evaluate the success of the algorithm?

Stylianou: I have two versions of this answer. One is that there's no real-world dataset that we can use to measure this, so we create proxy datasets. We have our data that we've collected via the TraffickCam app. We take subsets of that, put big blobs into them that we erase, and measure the fraction of the time that we correctly predict what hotel those are from.

So those images look as much like the victim images as we can make them look. That said, they still don’t necessarily look exactly like the victim images, right? That’s as good of a sort of quantitative metric as we can come up with.
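The proxy metric he describes, the fraction of occluded queries whose top match is the correct hotel, reduces to a simple computation. A sketch with toy data (the helper and labels are illustrative):

```python
def top1_accuracy(predicted_hotels, true_hotels):
    """Fraction of proxy queries whose nearest match is the correct hotel."""
    assert len(predicted_hotels) == len(true_hotels)
    correct = sum(p == t for p, t in zip(predicted_hotels, true_hotels))
    return correct / len(true_hotels)

# Toy retrieval results for five occluded proxy queries.
preds = ["hotel_A", "hotel_B", "hotel_A", "hotel_C", "hotel_B"]
truth = ["hotel_A", "hotel_B", "hotel_C", "hotel_C", "hotel_A"]
score = top1_accuracy(preds, truth)  # 3 of 5 correct -> 0.6
```

In practice this is often reported as top-k accuracy as well, since an analyst reviews a ranked list of candidate hotels rather than a single answer.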

And then we do a lot of work with the [NCMEC] to understand how the system is working for them. We get to hear about the instances where they’re able to use our tool successfully and not successfully. Honestly, some of the most useful feedback we get from them is them telling us, “I tried running the search and it didn’t work.”

Have positive hotel image matches actually been used to help trafficking victims?

Stylianou: I always struggle to talk about these things, in part because I have young kids. This is upsetting and I don’t want to take things that are the most horrific thing that will ever happen to somebody and tell it as our positive story.

With that said, there are cases we’re aware of. There’s one that I’ve heard from the analysts at NCMEC recently that really has reinvigorated for me why I do what I do.

There was a case of a live stream that was happening. And it was a young child who was being assaulted in a hotel. NCMEC got alerted that this was happening. The analysts who have been trained to use TraffickCam took a screenshot of that, plugged it into our system, got a result for which hotel it was, sent law enforcement, and were able to rescue the child.

I feel very, very lucky that I work on something that has real world impact, that we are able to make a difference.
