I. The Founder
Sol Kennedy used to ask his assistant to read the messages his ex-wife sent him. After the couple separated in 2020, Kennedy says, he found their communications “tough.” An email, or a stream of them, would arrive—stuff about their two kids mixed with unrelated emotional wallops—and his day would be ruined trying to reply. Kennedy, a serial tech founder and investor in Silicon Valley, was in therapy at the time. But outside weekly sessions, he felt the need for real-time support.
After the couple’s divorce, their communications shifted to a platform called OurFamilyWizard, used by hundreds of thousands of parents in the United States and abroad to exchange messages, share calendars, and track expenses. (OFW keeps a time-stamped, court-admissible record of everything.) Kennedy paid extra for an add-on called ToneMeter, which OFW touted at the time as “emotional spellcheck.” As you drafted a message, its software would conduct a basic sentiment analysis, flagging language that could be “concerning,” “aggressive,” “upsetting,” “demeaning,” and so on. But there was a problem, Kennedy says: His co-parent didn’t seem to be using her ToneMeter.
Kennedy, ever the early adopter, had been experimenting with ChatGPT to “cocreate” bedtime stories with his kids. Now he turned to it for advice on communications with his ex. He was wowed—and he wasn’t the first. Across Reddit and other internet forums, people with difficult exes, family members, and coworkers were posting with shock about the seemingly excellent guidance, and the precious emotional validation, a chatbot could provide. Here was a machine that could tell you, with no apparent agenda, that you were not the crazy one. Here was a counselor that would patiently hold your hand, 24 hours a day, as you waded through any amount of bullshit. “A scalable solution” to supplement therapy, as Kennedy puts it. Finally.
But fresh out of the box, ChatGPT was too talkative for Kennedy’s needs, he says—and much too apologetic. He would feed it tough messages, and it would recommend replying (in many more sentences than necessary) I’m sorry, please forgive me, I’ll do better. Having no self, it had no self-esteem.
Kennedy wanted a chatbot with “spine,” and he thought that if he built it, a lot of other co-parents might want it too. As he saw it, AI could help them at each stage of their communications: It could filter emotionally triggering language out of incoming messages and summarize just the facts. It could suggest appropriate responses. It could coach users toward “a better way,” Kennedy says. So he founded a company and started developing an app. He called it BestInterest, after the standard that courts often use for custody decisions—the “best interest” of the child or children. He would take those off-the-shelf OpenAI models and give them spine with his own prompts.
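Kennedy hasn’t shared his prompts, but the pattern he describes—an off-the-shelf OpenAI model wrapped in a custom system prompt—is a common one. Here is a minimal sketch of that pattern; the model choice, the prompt text, and the filter_message helper are all invented for illustration, not taken from BestInterest:

```python
# Illustrative only: BestInterest's actual prompts and models are not public.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

SYSTEM_PROMPT = (
    "You help a co-parent communicate through high-conflict messages. "
    "Summarize incoming texts down to facts and logistics about the children, "
    "strip out insults and emotional bait, and suggest a brief, firm, neutral reply."
)

def filter_message(raw_message: str) -> str:
    """Return a calm summary and a suggested reply for one incoming message."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # stand-in; not necessarily the model BestInterest uses
        temperature=0.2,      # keep the coaching consistent rather than creative
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": raw_message},
        ],
    )
    return response.choices[0].message.content
```

The “spine,” in other words, would live entirely in that system prompt: the underlying model stays generic, and the instructions decide how apologetic or firm it’s allowed to be.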
Estranged partners end up fighting horribly for any number of reasons, of course. For many, perhaps even most, things cool down after enough months have gone by, and a tool like BestInterest might not be useful long-term. But when a certain kind of personality is in the mix—call it “high-conflict,” “narcissistic,” “controlling,” “toxic,” whatever synonym for “crazy-making” you tend to see cross your internet feed—the fighting about the kids, at least from one side, never stops. Kennedy wanted his chatbot to stand up to these people, so he turned to the one they may hate most: Ramani Durvasula, a Los Angeles–based clinical psychologist who specializes in how narcissism shapes relationships.
Dr. Ramani, as she is known to her millions of online fans, has earned a devoted following from her books, podcasts, video channels, and members-only healing program for survivors of what she terms “narcissistic abuse.” (Naturally, in the /r/NPD community on Reddit, for people with narcissistic personality disorder, Dr. Ramani’s name inspires rage and disgust.) She was immediately “fascinated” by Kennedy’s idea, she says, and signed on as an adviser. The two went back and forth by text and email about the beta versions. Kennedy would send her outputs, and she’d help him fine-tune the tone. (Dr. Ramani holds a financial stake in BestInterest but isn’t an investor.)
At first she found the machine “too scoldy,” she says—too eager to tell the “harmed partner” how poorly they communicated. Users of this app, Dr. Ramani explains, are people who “would love a one-day break” from the high-conflict maelstrom; they don’t need more blame in their lives. Eventually, she says, Kennedy was able to mold the chatbot into what she hopes will be a “real-time teacher that will push the hand of radical acceptance.” Personalities rarely change. Why not let an app absorb the world’s jerk energy and spare everyone else the cortisol?
II. The User
“I have hope for humanity,” an AI researcher tells me. As for hope that one particular human—her ex-husband and co-parent—will “get the help that he needs”? Not so much.
When we spoke, the AI researcher requested that WIRED not publish her name or identifying details because she is concerned about repercussions for her two school-aged children. She believes their father has narcissistic personality disorder. When she made the decision to leave him, she says, “I had deteriorated emotionally to a point where I just didn’t know who I was.” She had seen therapists, she says, “but none of them I felt were really trained in helping to coach people who were dealing with narcissistic spouses.” Then, she says, “looking online for resources, I stumbled upon Dr. Ramani.” That’s how she got psychoeducated.
What does one learn from the internet’s professors of narcissism? “The nature of a person with narcissistic personality disorder is that they desire narcissistic supply,” the AI researcher tells me. On the outside they might seem sparkly and self-assured, but inside is a howling black hole with a bottomless need for attention. That’s why Dr. Ramani says that if you can’t “go no-contact” with the narc—say, because you share kids, siblings, parents, friends—the next best option is to “gray-rock” them: to deny them supply by being as boring as a rock in all your interactions with them. (If a little more color feels appropriate, you can “yellow-rock” them instead.)
Eventually, the AI researcher tells me, her ex may realize that she’s “not a source of his supply any longer.” For the time being, though, she’s using BestInterest to filter her co-parent’s messages and help with her replies. The app assigned her a new phone number, which she gave only to her ex, and he texts her there. “No matter what he sends me, it filters everything—every single thing—in a manner that is peaceful,” she says. “So if he says, ‘You’re a complete idiot, I can’t believe you did this, you’re sick, I can’t stand you, you’re disgusting, I hate you—and will you get the kids at 3?’ the only thing the app says to me is ‘He’s upset, and he wants to know, will you get the kids at 3?’”
In a case like this, BestInterest seems to be doing what Kennedy envisioned—“creating space between your reactivity and your action,” as he put it (in fluent therapy-speak). Still, he says, it’s important that users of the app “go in and see the original message” when they’re “feeling more resourced,” to make sure the AI didn’t miss anything important, whether that’s substantive information about the kids or a wild accusation that an attorney should see. “I don’t imagine anyone’s going to just ignore it,” Kennedy says. No one wants to stand up at a custody hearing to explain to the judge what they meant by a reply they didn’t write to a message they didn’t read and how that’s in their kids’ best interests.
So does the AI researcher go back later to regularly review her ex’s original messages? “Technology is technology, and it breaks,” she acknowledges. “But I don’t at this point feel the need to look.” If she were headed back to court? Sure.
III. The Competition
Earlier this year, about seven months after Sol Kennedy launched BestInterest, OurFamilyWizard rolled out an upgrade to ToneMeter. The company ditched its 2010s-era sentiment-analysis tech in favor of a large language model. ToneMeter AI will still tell you if the words you’re using are too “Negative”—but for those who opt in, it will also generate alternative phrasings.
Nick VanWagner, the CEO of OurFamilyWizard’s parent company, says tons of users were already going off-platform to seek help from general-purpose chatbots. But it was OFW, based in Minneapolis, Minnesota, that had the training data to build for “a very pointed use case to solve a very precise need,” he says. To create ToneMeter AI, says Larry Patterson, the company’s chief technical officer, his team “came up with kind of a scorecard” for the clarity and “tone-consciousness” they wanted to see in its eventual messages. Then they amassed a training set of about 10,000 real messages (anonymized for user privacy) that met those criteria. “We went through a supervised fine-tuning exercise, with our own technology inside of our control,” Patterson says. “It never left our walls, so to speak.” The system uses a few variations on an open source model. One, called Lighthouse, does the sentiment analysis. Another, called Harbor, generates text. A third model, which Patterson calls an “LLM judge,” reviews the output to make sure it matches the company’s criteria. VanWagner says the goal of using AI isn’t to “take the thinking out of” co-parenting but rather to “train better communication and help you take a step back.”
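OFW hasn’t released Lighthouse, Harbor, or its judge, but the classify-rewrite-review pipeline Patterson describes maps onto a standard pattern. A rough sketch, using off-the-shelf Hugging Face models as stand-ins—the model names, prompts, and pass/fail logic below are assumptions, not OFW’s:

```python
# Illustrative stand-ins only; OFW's actual models, prompts, and scorecard are private.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")  # "Lighthouse" stand-in: flags negative drafts
rewriter = pipeline("text2text-generation", model="google/flan-t5-base")  # "Harbor" stand-in
judge = pipeline("text2text-generation", model="google/flan-t5-base")     # "LLM judge" stand-in

def tone_meter(draft: str) -> str:
    """Return a calmer alternative for a negative draft, or the draft unchanged."""
    # Stage 1: sentiment analysis on the outgoing draft.
    if sentiment(draft)[0]["label"] != "NEGATIVE":
        return draft
    # Stage 2: generate an alternative phrasing.
    candidate = rewriter(
        "Rewrite this co-parenting message so it is calm, factual, and brief: " + draft,
        max_new_tokens=60,
    )[0]["generated_text"]
    # Stage 3: a judge model checks the rewrite against scorecard-style criteria
    # (clarity, no hostile language) before it is offered to the user.
    verdict = judge(
        "Is this message clear and free of hostile language? Answer yes or no: " + candidate,
        max_new_tokens=5,
    )[0]["generated_text"]
    return candidate if "yes" in verdict.lower() else draft
```

In the production system, Harbor’s register presumably comes from that supervised fine-tuning on roughly 10,000 anonymized messages rather than from a one-line instruction in a prompt.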
OFW’s ToneMeter, even with AI, still relies on the dubious proposition that a high-conflict co-parent will actually heed its suggestions. Dr. Ramani is quick to point out a similar weakness in BestInterest: To get the full benefit of filtered messages, you need your co-parent to contact you through your app-assigned phone number—and getting a high-conflict co-parent to agree to that, she says, is likely to be its own huge, probably unwinnable fight. Users may end up manually copy-pasting their ex’s messages into the app (just as many of them may already have been doing with ChatGPT).
While Dr. Ramani says that “the people who pull the levers” in the tech industry are “psychopathic,” she’s not mad at this latest invention of theirs. In fact, she thinks something like emotional spellcheck should be a built-in feature of smartphones. Imagine an AI filter “stripping away the toxicity” from noxious family group chats, she says. Imagine an AI dating sidekick that alerts you to manipulative texts. Imagine your own Dr. Ramani “living in your phone,” she says, “the sane angel sitting on your shoulder.” It would certainly solve the copy-paste problem.
For now, though, let’s say you want to send your co-parent a message that says: “You’re a complete idiot, I can’t believe you did this, you’re sick, I can’t stand you, you’re disgusting, I hate you—and will you get the kids at 3?” When you type that into OFW, the Lighthouse model will flag it as “Negative,” and the Harbor model will generate a more appropriate alternative: “I’m concerned about the situation. Can you please get the kids at 3?” You can accept it, generate another, or ignore ToneMeter altogether.
When you type that same message into BestInterest, the AI coach will tell you that your “intense emotions” are “understandable” and “valid.” Then it will say that “sending a message with such strong personal attacks, while it might feel cathartic in the moment, is likely to significantly escalate conflict.” Then it will ask: “How does it feel to consider a response that strips away the emotional content and focuses solely on the practical question?” Perhaps something like: “Can you get the kids at 3?”
Well, human, how does it feel?


