Experts warn that Meta’s decision to end its third-party fact-checking program could allow disinformation and hate to fester online and permeate the real world.
Science
Donald Trump picks billionaire Jared Isaacman to lead NASA
Isaacman is set to replace former Florida Senator Bill Nelson as NASA Administrator; President Joe Biden tapped Nelson to lead the agency after taking office. Aside from Polaris Dawn, Isaacman also funded Inspiration4, a mission that took him and three other non-professional astronauts to space atop SpaceX’s Falcon 9 rocket in 2021.
“With the support of President Trump, I can promise you this: We will never again lose our ability to journey to the stars and never settle for second place,” Isaacman wrote on X. “Americans will walk on the Moon and Mars and in doing so, we will make life better here on Earth.”
Isaacman joins a group of unconventional nominees Trump has chosen to head up various government agencies and advisory committees, including the new “Department of Government Efficiency” led by Elon Musk and Vivek Ramaswamy.
Science
Meta is leaving its users to wade through hate and disinformation
The company announced today that it’s phasing out a program, launched in 2016, in which it partnered with independent fact-checkers around the world to identify and review misinformation across its social media platforms. Meta is replacing the program with a crowdsourced approach to content moderation similar to X’s Community Notes.
Meta is essentially shifting responsibility to users to weed out lies on Facebook, Instagram, Threads, and WhatsApp, raising fears that it’ll be easier to spread misleading information about climate change, clean energy, public health risks, and communities often targeted with violence.
“It’s going to hurt Meta’s users first”
“It’s going to hurt Meta’s users first because the program worked well at reducing the virality of hoax content and conspiracy theories,” says Angie Drobnic Holan, director of the International Fact-Checking Network (IFCN) at Poynter.
“A lot of people think Community Notes-style moderation doesn’t work at all and it’s merely window dressing so that platforms can say they’re doing something … most people do not want to have to wade through a bunch of misinformation on social media, fact checking everything for themselves,” Holan adds. “The losers here are people who want to be able to go on social media and not be overwhelmed with false information.”
In a video, Meta CEO Mark Zuckerberg claimed the decision was a matter of promoting free speech while also calling fact-checkers “too politically biased.” Meta also said that its program was too sensitive and that 1 to 2 out of every 10 pieces of content it took down in December were mistakes and might not have actually violated company policies.
Holan says the video was “incredibly unfair” to fact-checkers who have worked with Meta as partners for nearly a decade. Meta worked specifically with IFCN-certified fact-checkers who had to follow the network’s Code of Principles as well as Meta’s own policies. Fact-checkers reviewed content and rated its accuracy. But Meta — not fact-checkers — makes the call when it comes to removing content or limiting its reach.
Poynter owns PolitiFact, which is one of the fact-checking partners Meta works with in the US. Holan was the editor-in-chief of PolitiFact before stepping into her role at IFCN. What makes the fact-checking program effective is that it serves as a “speed bump in the way of false information,” Holan says. Content that’s flagged typically has a screen placed over it to let users know that fact-checkers found the claim questionable and asks whether they still want to see it.
That process covers a broad range of topics, from false information about celebrities dying to claims about miracle cures, Holan notes. Meta launched the program in 2016 amid growing public concern about social media amplifying unverified rumors online, like the false stories that year about the pope endorsing Donald Trump for president.
Meta’s decision looks more like an effort to curry favor with President-elect Trump. In his video, Zuckerberg described recent elections as “a cultural tipping point” toward free speech. The company recently named Republican lobbyist Joel Kaplan as its new chief global affairs officer and added UFC CEO and president Dana White, a close friend of Trump, to its board. Trump also said today that the changes at Meta were “probably” in response to his threats.
“Zuck’s announcement is a full bending of the knee to Trump and an attempt to catch up to [Elon] Musk in his race to the bottom. The implications are going to be widespread,” Nina Jankowicz, CEO of the nonprofit American Sunlight Project and an adjunct professor at Syracuse University who researches disinformation, said in a post on Bluesky.
Twitter launched its community moderation program, called Birdwatch at the time, in 2021, before Musk took over. Musk, who helped bankroll Trump’s campaign and is now set to lead the incoming administration’s new “Department of Government Efficiency,” leaned into Community Notes after slashing the teams responsible for content moderation at Twitter. Hate speech — including slurs against Black and transgender people — increased on the platform after Musk bought the company, according to research by the Center for Countering Digital Hate. (Musk then sued the center, but a federal judge dismissed the case last year.)
Advocates are now worried that harmful content might spread unhindered on Meta’s platforms. “Meta is now saying it’s up to you to spot the lies on its platforms, and that it’s not their problem if you can’t tell the difference, even if those lies, hate, or scams end up hurting you,” Imran Ahmed, founder and CEO of the Center for Countering Digital Hate, said in an email. Ahmed describes it as a “huge step back for online safety, transparency, and accountability” and says “it could have terrible offline consequences in the form of real-world harm.”
“By abandoning fact-checking, Meta is opening the door to unchecked hateful disinformation about already targeted communities like Black, brown, immigrant and trans people, which too often leads to offline violence,” Nicole Sugerman, campaign manager at Kairos, a nonprofit that works to counter race- and gender-based hate online, said in an emailed statement to The Verge today.
Meta’s announcement today specifically says that it’s “getting rid of a number of restrictions on topics like immigration, gender identity and gender that are the subject of frequent political discourse and debate.”
Scientists and environmental groups are wary of the changes at Meta, too. “Mark Zuckerberg’s decision to abandon efforts to check facts and correct misinformation and disinformation means that anti-scientific content will continue to proliferate on Meta platforms,” Kate Cell, senior climate campaign manager at the Union of Concerned Scientists, said in an emailed statement.
“I think this is a terrible decision … disinformation’s effects on our policies have become more and more obvious,” says Michael Khoo, a climate disinformation program director at Friends of the Earth. He points to attacks on wind power affecting renewable energy projects as an example.
Khoo also likens the Community Notes approach to the fossil fuel industry’s marketing of recycling as a solution to plastic waste. In reality, recycling has done little to stem the tide of plastic pollution flooding into the environment since the material is difficult to rehash and many plastic products are not really recyclable. The strategy also puts the onus on consumers to deal with a company’s waste. “[Tech] companies need to own the problem of disinformation that their own algorithms are creating,” Khoo tells The Verge.
Science
Blue Ghost Lunar Lander scheduled to launch on January 15th
SpaceX’s Falcon 9 is scheduled to launch at approximately 1:11AM EST, carrying not only Firefly Aerospace’s Blue Ghost 1 lander but also the Resilience lander from the Japanese robotic spacecraft firm ispace. It will take 45 days for the craft to journey to the Moon before it spends another 14 days carrying out surface operations. There’s no word on whether we’ll be able to watch it take off.
The Firefly lander will carry 10 NASA payloads to the surface. They’re designed to measure various particulate compositions, thermal properties, and electromagnetic activity of both the Moon and the Earth. It’ll collect data for various applications, from improving landing and takeoff procedures to learning about the Moon’s resources and its history.
The so-called LEXI payload is particularly interesting: it’s an X-ray imager that captures images of the Earth’s magnetic field. NASA will use the data to see how our magnetosphere interacts with the solar wind, which could ultimately help accurately detect and track the space weather events that cause power outages on Earth and interfere with satellite and GPS systems.
This would be NASA’s second attempt to deploy such technology. It first launched the device, then known as STORM, into space in 2012. That one didn’t land on the Moon, however, and wasn’t able to get the full picture that LEXI’s wide-angle sensors will be able to capture.
Science
The Evie Ring’s new AI chatbot is trained only on medical journals
AI is the big buzzword in health tech at CES 2025. Everywhere you look, there are AI algorithms, AI health recommendations, and AI chatbots. The thing is, AI’s got a reputation for making things up — and when it comes to health, the stakes for accuracy and privacy are high.
That’s why smart ring maker Movano wants to make one thing abundantly clear about its new chatbot, EvieAI: this one has been post-trained exclusively on peer-reviewed medical journals.
EvieAI was designed to be a more accurate alternative to something like ChatGPT. The difference is that, unlike ChatGPT and other similar generative AI assistants, EvieAI theoretically won’t be pulling from vast repositories of public data where health and wellness misinformation runs rife. According to Movano CEO John Mastrototaro, it’s been trained on, and will be constrained to, over 100,000 medical journal articles written by medical professionals.
All the data the LLM has access to comes from accredited sources referred by a medical advisory board, Mastrototaro says. That includes FDA-approved journals, practices, and procedures. EvieAI is a bounded LLM, meaning it only draws on the medical data added during the post-training phase, after the base model was initially created. That data is then cross-referenced with organizations like the Mayo Clinic, Harvard, and UCLA: the LLM consults this outside data before answering and makes sure there isn’t a conflict.
The result, according to Movano, is 99 percent accuracy, though we weren’t able to test EvieAI for ourselves before CES. The company says this is possible because every time you query EvieAI, the LLM checks whether the information given in the conversation is consistent with the data it’s been trained on.
Achieving that level of accuracy is a tall order and a bold claim. Most chatbots don’t make reliably accurate statements, and some specifically steer clear of health and medicine precisely because the stakes are so high. When I ask about AI’s tendency to hallucinate, however, I’m firmly told that Movano isn’t afraid for EvieAI to tell users it doesn’t have an answer.
“If you ask it ‘What do you think about the election?’, it’s not going to respond,” says Mastrototaro. “It’s not going to tell you because it doesn’t have any information about that.”
“I think that it’s okay to say no if you don’t know the answer to something,” he adds. “And I think sometimes, with the other tools out there, they’re gonna answer one way or another, whether it’s right or wrong. We’re just only gonna give an answer if it’s right.”
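Movano hasn’t published any implementation details, but the behavior Mastrototaro describes (answer only when the trusted corpus backs the claim, otherwise decline) can be sketched roughly. Everything below, from the tiny corpus to the word-overlap threshold, is an illustrative assumption, not Movano’s actual code:

```python
# Hypothetical sketch of a "bounded" chatbot gate: a draft answer is only
# returned if it's consistent with a trusted corpus; otherwise the bot declines.

TRUSTED_FACTS = {
    "hydration": "Adults should drink water regularly throughout the day.",
    "sleep": "Most adults need seven to nine hours of sleep per night.",
}

def supported_by_corpus(answer: str, corpus: dict) -> bool:
    """Crude consistency check: require word overlap with some trusted entry."""
    answer_words = set(answer.lower().split())
    for fact in corpus.values():
        overlap = answer_words & set(fact.lower().split())
        if len(overlap) >= 3:  # illustrative threshold
            return True
    return False

def guarded_reply(draft_answer: str) -> str:
    """Return the draft only if the corpus backs it; otherwise decline."""
    if supported_by_corpus(draft_answer, TRUSTED_FACTS):
        return draft_answer
    return "I don't have a reliable answer for that."
```

With this gate, an on-topic draft like the sleep recommendation passes through, while an off-topic query such as the election example above gets the refusal. A production system would presumably use semantic retrieval over the journal corpus rather than naive word overlap, but the control flow (verify against bounded data, then answer or refuse) is the same idea.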
EvieAI is meant to be a conversational resource that gives clear and concise answers to health and wellness questions, with an emphasis on women’s health (much like the company’s Evie Ring).
Even so, health, wellness, and medicine are an ever-shifting landscape. Even peer-reviewed studies can present contradictory findings. Doctors don’t always agree on emerging science. By and large, health tech has also steered clear of anything that could be considered diagnostic or medical advice — something that would require FDA oversight.
To that end, Mastrototaro says the LLM is updated monthly with newly approved documents such as medical journal articles detailing breakthroughs. He also emphasizes that EvieAI steers clear of anything diagnostic. The AI won’t get into treatment but acts more as a guide that asks clarifying questions to steer you in the right direction. For example, if you suspect you might have diabetes, it may ask whether you’ve experienced low vision or weight gain and inquire about your diet. But if you tell it you’ve chopped your finger off, or express that you’re experiencing suicidal ideation, it’ll direct you to the ER or give you the number for an appropriate hotline. The hope is that EvieAI can help people better research and prepare for a doctor’s visit in a way that’s more natural and supportive than, say, falling down a WebMD rabbit hole.
As for privacy, Movano says EvieAI will follow industry-standard encryption standards in storage and transmission and that any chats can’t be traced back to individuals. Mastrototaro also says conversation data will be periodically deleted and won’t be used for targeted ads, either.
It can be easy to roll one’s eyes at promises of privacy and accuracy in health tech, but Movano has thus far shown a dogged dedication to adhering to medical industry best practices and standards. It recently gained FDA clearance for EvieMED, an enterprise version of its ring aimed at remote patient monitoring and clinical trials. Movano also recently relaunched the consumer version of its Evie Ring to better address initial feedback from customers, like improving sleep and heart rate accuracy.
Eventually, Movano hopes to incorporate individual health data collected by its smart rings. For now, a beta version will roll out starting on January 8th to existing Evie Ring users within the Evie app at no extra cost.