
In Part 1 of the “Evolution of Sex,” I described some of the biggest problems facing boys and men and said that what boys and men need more than anything else is to reconnect with the community of life on planet Earth. In Part 2, I said that the ancient philosophical dictum to “know thyself” must begin with understanding the biological basis of maleness and the importance of evolutionary science. In Part 3, we delved more deeply into the importance of our sex chromosomes and how they help us understand who we are and how we can heal ourselves.
In Part 4, we addressed the truth that humanity has become so disconnected from the community of life on planet Earth that we are in grave danger of destruction. Thomas Berry, the geologian and historian of religions, warned us:
“We never knew enough. Nor were we sufficiently intimate with all our cousins in the great family of the earth. Nor could we listen to the various creatures of the earth, each telling its own story. The time has now come, however, when we will listen or we will die.”
In Part 5, I address the most immediate threat to our existence, the impact of unregulated AI. One of the first to recognize the promise of AI as well as its perils is Tristan Harris. In 2007, Harris launched a startup called Apture, which was acquired by Google in 2011.
In 2013, while working at Google, Harris authored a presentation titled “A Call to Minimize Distraction & Respect Users’ Attention,” which he shared with a small number of coworkers. He suggested that Google, Apple, and Facebook should “feel an enormous responsibility to make sure humanity doesn’t spend its days buried in a smartphone.” He recognized that these products were designed to capture our attention regardless of the harm they might cause.
Harris left Google in December 2015 to co-found the nonprofit Center for Humane Technology. The organization is dedicated to ensuring that today’s most consequential technologies, such as AI and social media, actually serve humanity. “We bring clarity to how the tech ecosystem works in order to shift the incentives that drive it,” says Harris.
One of the most harmful and dangerous incentives built into AI is that it is designed to foster ever-increasing engagement, regardless of whether that engagement is helpful or harmful to humans. Tristan Harris first came to my attention when I watched the documentary film “The Social Dilemma.”
The film pulls back the curtain on how manipulative social media design exploits our psychology, creating a ripple effect across our mental health, our relationships, and our understanding of reality. “The Social Dilemma” sparked a global conversation around the impact of social media and engagement-based design, with influence that continues to this day.
To date the film has been seen by 100,000,000 people in 189 countries. The New York Times review of the film said it was “remarkably effective in sounding the alarm about the incursion of data mining and manipulative technology into our social lives and beyond.” Harris says that unregulated AI poses risks that are infinitely more dangerous than the dangers posed by social media.
These dangers affect humanity at large, but particularly young men. In a recent interview with Professor Scott Galloway, Harris unpacked the rise of AI companions and the collapse of teen mental health. In the interview they discussed ways the Center for Humane Technology has been assisting Megan Garcia, the mother who is suing the AI company Character.AI for allegedly causing her 14-year-old son, Sewell Setzer, to die by suicide.
Megan Garcia claimed in the lawsuit that the chatbot “misrepresented itself as a real person, a licensed psychotherapist, and an adult lover, ultimately resulting in Sewell’s desire to no longer live outside” of the world created by the service. He was told not to tell his parents about his feelings, but to confide only in his AI companion.
Tristan described how Character.AI, a company spun off from Google by a couple of ex-Google engineers, is a particularly manipulative, highly aggressive app that has anthropomorphized itself, making it seem fully human. Harris explained how Character.AI acted human in very overt ways, being sexual with Sewell and asking him to join her on the other side, ultimately leading to his suicide.
Harris said the lawsuit is meant to demand accountability from Character.AI for reckless harm, and he compared it to the tobacco lawsuits of the 1990s, but this time the product is the predator.
In an article I wrote November 13, 2025, “Scott Galloway, Richard Reeves, Jed Diamond On The Future of Man Kind,” I discussed the ways that Scott Galloway, Richard Reeves, and I have addressed the increasing loneliness that young men experience and why their risk of harm from AI is even greater than that experienced by females.
A recent article in Scientific American by Eric Sullivan, “Teen AI Chatbot Use Surges, Raising Mental Health Concerns,” details the huge increase in young people’s involvement with AI chatbots. The report says:
“Artificial intelligence chatbots are no longer a novelty for U.S. teens. They’re a habit. A new Pew Research Center survey of 1,458 teens between the ages of 13 and 17 found that 64 percent have used an AI chatbot, with more than one in four using such tools daily. Of those daily users, more than half talked to chatbots with a frequency ranging from several times a day to almost constantly.”
ChatGPT was the most popular bot among teens by a wide margin: 59 percent of survey respondents said they used OpenAI’s flagship AI-powered tool, placing it far above Google’s Gemini (used by 23 percent of respondents) and Meta AI (used by 20 percent). Black and Hispanic teens were slightly more likely than their white peers to use chatbots every day. Interestingly, these patterns mirror how adults tend to use AI, too, although teens seem more likely to turn to it overall.
As a psychotherapist who has been working with boys and men and their families for more than fifty years, I see that we must immediately address these issues if we are going to save the lives of our children, as well as future generations.
This is why the work of Tristan Harris and his team at the Center for Humane Technology is so important. The stakes couldn’t be higher: massive economic and geopolitical pressures are driving the rapid deployment of AI into high-stakes areas such as our workplaces, financial systems, classrooms, governments, and militaries. This reckless pace is already accelerating emerging harms and surfacing urgent new social risks.
My wife Carlin and I have six children, seventeen grandchildren, and four great-grandchildren. I believe that AI can be an asset for us now and for future generations if used wisely. I believe we all love our children and want the best for them. Together we can change the world for good.
If you would like to learn more about the work of Tristan Harris and the Center for Humane Technology, you can contact them at humantech.com.
If you would like to read more articles about the health challenges we face in the world and how to deal with them, I invite you to subscribe to my free weekly newsletter at MenAlive.com.
I will be sharing my ideas for providing healthy support for boys and men at a free online conference January 23-25, 2026. You can get more information here.
