Markiplier is paying expert researchers to figure out if AI can be Real Good for humanity

08/21/2025

Like most people, Markiplier got his first inkling of the generative “AI” gold rush with OpenAI’s DALL-E. The concept of AI had been around for decades: Turing, after all, created his famous test back in 1950, and digital-native Millennials will remember growing up with chatbots like SmarterChild in the early 2000s. DALL-E, though, was the first publicly accessible product that linked text input with image output.

And all that output was “garbage,” Markiplier (aka Mark Fischbach) says.

He made a YouTube video shortly after DALL-E was released in 2022, poking fun at the generator’s surreal, nightmarish, and thoroughly memeable productions. It couldn’t nail faces. It gave people 15 fingers and 500 teeth. Backgrounds were fuzzy at best; objects were conglomerate blobs. The video’s premise was a joke: seeing how bad DALL-E’s “art” could be. But even as he uploaded the video, something about it nagged at him. Eventually he realized: DALL-E scared him, and not because its generations looked like something out of a low-budget found-footage horror movie.


It scared him because it was just the beginning.

“I grew up around computers. From the age of five, I was around computers with my dad,” he says. “I knew the pace would increase. A lot of companies are very interested in the next big thing, the next big technological push. When you have that much money going into it, it will start to iterate.”

He was right. Big tech and startups alike were fresh off the web3/metaverse/crypto/NFT bandwagons, which had been profitable for only a select few, and were looking for trendy new places to put their money, hoping to strike it rich this time. The late 2022 debut of OpenAI’s ChatGPT took interest to a fever pitch, and billions of dollars began pouring into gen AI development, flowing from companies like Google, Microsoft, and Meta. In just a couple years, their backing took gen AI from 15 fingers and 500 teeth to outputting video that’s troublingly indistinguishable from reality.

Fast-forward to 2025, and “AI” is an omnipresent buzzterm. Companies have burned cash bending their entire business models around incorporating large language models and releasing generative products. We are swarmed with marketing that tells us to accept that gen AI is the future. There are tools that give YouTubers their next video idea, writers their next book chapter, entrepreneurs their next startup concept. After all, don’t we want to be more productive? Don’t we want to think less?

Fischbach has kept a close eye on AI’s development.

To get a better understanding of how generative technology is being applied in the real world, he did an internship with production/VFX studio Corridor Digital and consulted with Dr. Amanda Muyskens, an accomplished data scientist with a PhD in statistics (who happens to be married to his longtime friend and fellow creator Bob Muyskens).

The technology, he says, is fascinating. (He’s also quick to point out that ChatGPT and other popular products aren’t actually AI; they’re large language models. “They don’t think,” he says. “They just don’t. That’s not how the technology works.”) The problem is that many companies utilizing it are taking advantage of the fact that AI input and output is not just “a legal grey area, but a legal dark abyss area,” he says.

“It can’t be a secret that Google, owner of YouTube, used all YouTube videos,” he says. “Put ‘Let’s Play’ into Veo 3, and it recreates it. It even recreates Fortnite accurately. Oftentimes these companies will change their TOS to add in language to put them in a less indefensible position.”

Laws and regulations, he adds, cannot keep up with the speed at which AI companies are moving, especially when it comes to copyright: “You put up a wall, and they’re already ten paces past the wall.”

In 2024, frustrated that AI could be used to better humanity but the companies building it weren’t doing so, he went to the Muyskenses.

Together, they founded Real Good AI, a 501(c)(3) nonprofit researching how AI can be used for positive, humanitarian impact.

“Where we are now with AI, there are some serious concerns about how it will or will not work going forward, from where the technology stands,” Bob says. “Mark and Mandy share an interest in AI, and it went from casual conversations to Mark feeling like something needs to be done.”

Dr. Muyskens “has the expertise and knowledge and connections to other scientists to do something about it, in terms of research and developing new tools and techniques,” he adds. Prior to founding Real Good AI, she worked for five years as a scientific researcher and principal investigator, with areas of specialty including Gaussian processes, uncertainty quantification, scaling machine learning algorithms, and communicating science.

Real Good AI is “focused on trying to ethically pursue ways to improve and new methods to make these tools work for humanity,” Bob says. (It’s also staunchly against “AI art.”)

Some of its initial projects include:

  • Using data analysis to help nonprofits learn from other nonprofits in their area, examining best practices and funding histories to project what sort of running budget they can expect
  • Using data analysis to understand the impacts of public policy on homelessness among older adults, in an effort to revisit policies that aren’t delivering the results they were supposed to, and to identify policies that are especially effective
  • Figuring out how to identify when LLMs are lying, and getting those LLMs to communicate to users if and when the factual accuracy of output is in question

“This is a lofty idea,” Bob explains, about the lying project. “One of the problems with LLMs currently is they have no way to credit their source. Something gets generated, but there’s no, ‘I pulled from this artist, this animation.’ They don’t know. The way these models work, they can’t tell where things come from. It’s not trying to obscure that information, it just can’t tell you.”

Real Good AI’s thought process is to make an overlay, something on top of the model where a user would still get output, but then the LLM would say, “This is a pretty good answer, I’m pretty confident, this is being pulled from X source,” Bob says. “Or it could say, ‘I’m making this from thin air, I’m extrapolating, you should not be confident or comfortable this is coming from factual sources.’”

Current LLMs “give you the appearance of ‘Oh, let me think about that, let me look at some sources,’ but that’s not really what they’re doing,” he says. “If you want to be responsible, you have to understand every answer is going to sound equally confident. Answers can look very credible, but might be a complete hallucination.”
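The overlay Bob describes could take several forms; one common building block is the per-token log-probabilities some LLM APIs expose. The sketch below is a hypothetical illustration of that idea only, not Real Good AI's actual method: it averages token log-probabilities into a rough confidence score and appends a plain-language warning when the score falls below an assumed, illustrative threshold.

```python
import math

# Assumed cutoff for flagging output; a real system would tune this empirically.
LOW_CONFIDENCE_THRESHOLD = 0.6

def annotate_confidence(answer: str, token_logprobs: list[float]) -> str:
    """Append a confidence note to an LLM answer.

    token_logprobs: log-probability the model assigned to each sampled token,
    as exposed by some LLM APIs. The geometric mean of those probabilities
    serves as a crude proxy for how "sure" the model was.
    """
    avg_prob = math.exp(sum(token_logprobs) / len(token_logprobs))
    if avg_prob >= LOW_CONFIDENCE_THRESHOLD:
        note = f"[confidence ~{avg_prob:.0%}: model is relatively sure]"
    else:
        note = f"[confidence ~{avg_prob:.0%}: treat as possible extrapolation]"
    return f"{answer} {note}"

# A confidently sampled answer vs. an uncertain one (logprobs are made up).
sure = annotate_confidence("Paris is the capital of France.", [-0.05, -0.1, -0.02])
shaky = annotate_confidence("The treaty was signed in 1407.", [-1.2, -0.9, -1.5])
```

Even this toy version shows the limits Bob points to: token-level probability measures fluency, not factual grounding, so a real overlay would also need some form of source attribution, which current model architectures don't readily provide.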

Dr. Muyskens is leading these projects, with the help of summer research fellows. Fischbach stepped up and has been fully funding Real Good AI himself. Dr. Muyskens isn’t taking a paycheck yet, but Real Good AI has brought in other team members, including the aforementioned fellows, plus more data scientists and an outreach director. Fischbach, Bob, Bob’s mother Diane Muyskens, and veteran music teacher David Bell all sit on the board.

Staffing is tough, Fischbach says.

“Setting up a charity is a very difficult thing. We cannot beat the big giants who are pumping a million billion dollars into this,” he explains. “You have Google paying tens of millions of dollars, offering compensation packages for AI. That’s what we’re fighting against. I often make jokes about being Mr. Big Dick Moneybags over here, but I can’t fight that. You have to find people willing to say no to those big paydays, and this is the hottest employment field right now.”

The people who have joined Real Good AI “have taken a hellacious pay cut to be part of this team, because they believe in it,” he says. “It’s a funny thing, being this tiny speck of a nonprofit versus the mountains of these other companies. But I’m happy to fund it myself, because I believe they’re right. I 100% believe they’re on the right track and that the mission is strong.”

That mission is now getting more support, thanks to recent controversy. After Fischbach mentioned using ChatGPT in an episode of Distractible, his podcast with Bob and their mutual friend/creative collaborator Wade Barnes, viewers were quick to pounce, accusing him of promoting generative LLMs. Fischbach held a livestream afterward to clarify that he was criticizing modern dependence on LLMs.

“What was good about people raising a stink is that it gave me an opportunity to talk about [Real Good AI], to draw some attention and get some donations,” he says.

The attention, Bob adds, has been “overwhelming,” with people wanting to volunteer and organizations reaching out for partnerships and, crucially, funding. Right now, Real Good AI is trying to communicate more about its mission (some people reaching out ask what kind of chatbot the org is building, a fundamental misunderstanding of its work) and to bring in more researchers.

“We’re at a point in the broader climate of science and research where science is kind of losing the fight. Science is being broadly defunded,” Bob says. “We’re just trying to keep that fire alive and be a beacon of good research, good moral practices, good ethical practices, and try to fight the fight.”
