How Dr. David L. Hawk Has Made a Career Leading Ethical Business Practices

USA Today, February 13, 2025

Chris Gallagher
Contributor
David L. Hawk, PhD, has built a notable career as director of the Center for Corporate Rehabilitation, advising and leading businesses in ethical practices and in reconnecting with nature. His mission has focused on developing unusual businesses capable of responding to change.
Dr. Hawk began his education at Iowa State University, earning a bachelor's degree in engineering in 1971. He then earned master's degrees in architecture and in city planning from the University of Pennsylvania in 1974.
Dr. Hawk's turning point came in 1968, near the end of his service with a helicopter squadron in Vietnam. What he saw there of how humans treat each other and nature led him to later begin a doctorate in system science and international business at The Wharton School, University of Pennsylvania. He completed his PhD in 1979; his dissertation argued that business as usual was creating environmental deterioration that would lead to climate change. With the help of several industries, he demonstrated how and why humans must change.
Dr. Hawk conducted much of his doctoral research in Sweden. While living there, he co-founded, and became a visiting professor at, the Institute of International Business at the Stockholm School of Economics. His environmental project launched the Institute's research with an extensive study of the relationship between business practices and environmental deterioration. The heads of 20 firms and six governments took an active role in the project, and they came to strongly support his concerns about the global spread of environmental deterioration driven by business-as-usual thinking. Sweden's prime minister subsequently presented Dr. Hawk's work to the OECD.
Dr. Hawk also established and managed the Foundation of the Eternal Feminine, a nonprofit dedicated to educating women worldwide for leadership roles. Meanwhile, he continued to draw on his central inspiration, nature, to lead himself and others toward unusual businesses that could sustain and enrich natural life.
Since 2005, Dr. Hawk has sponsored research and international corporate retreats on his 1,500-acre farm in Fairfield, Iowa. Until he recently turned it over to a trusted friend, he had done the same on his eight-acre farm in Mt. Olive, New Jersey. Both locations seek to illustrate the connection between professional work and nature. The New Jersey site holds an 1836 stone house once occupied by a man who illustrated the Wizard of Oz books and stories.
Before his recent work at the Center for Corporate Rehabilitation and the Foundation of the Eternal Feminine, Dr. Hawk was senior advisor at China Construction America Inc., where he helped significantly expand the company's presence and revenue. He did so by helping it adapt to climate change, demonstrating dedication to the protection of nature, and seeking innovation at all levels of the corporation. Earlier, while serving as an adviser to IBM, he wrote a guidebook on IT business futures for the company's CEO.
Beyond his corporate work, Dr. Hawk has authored six books and numerous journal articles. Through this work, he encourages others to use mathematics, science, poetics and humor to categorize context in terms of its 'dimensionality.' The framework uses five dimensions of thinking to encourage people to question authorities, orders, views of nature (or the lack thereof) and governance of self. "I put this together to help people, particularly non-university, even anti-university people, to understand what they think and say," Dr. Hawk notes. He promotes thinking for oneself and assuming moral responsibility for results in all affairs, including work in professional settings.
Dr. Hawk's early book, "Too Early, Too Late, Now What?", was placed in the Library of Congress and is widely noted. A more recent book, "Humans are F---ed!", released in 2023, won a prize and tapped into growing concern over the future of life on Earth.
Additionally, Dr. Hawk served as a commissioner for net neutrality, championing equal access to the internet as an essential service for making knowledge accessible to people worldwide. In 2004, IBM named him International Professor of the Year.
Dr. Hawk promotes a change of mind in corporate settings, encouraging individuals to support self-governance. His work demonstrates a passion for individuality and for selflessness beyond material desires. As a thought leader, author and director, Dr. Hawk's efforts toward a better world for all are evident in his compassionate and thoughtful work. Many of his past students have become leaders in their industries and organizations. Of the 10,000 souls who were his students, he remains in contact with about 1,000, continuing to learn from their lives.
About Marquis Who's Who®:
Since 1899, when A. N. Marquis printed the First Edition of Who's Who in America®, Marquis Who's Who® has chronicled the lives of the most accomplished individuals and innovators from every significant field, including politics, business, medicine, law, education, art, religion and entertainment. Who's Who in America® remains an essential biographical source for thousands of researchers, journalists, librarians and executive search firms worldwide. The suite of Marquis® publications can be viewed at the official Marquis Who's Who® website, www.marquiswhoswho.com.


Related Articles

How to dismiss a high-profile employee without a Trump-Musk-style meltdown

Business Insider, a day ago

Star talent can be hard to retain — and even harder to let go. The public fallout between President Donald Trump and Elon Musk this week may be an extreme example of a hotshot's exit going off the rails, but leadership experts said it underscores just how dicey it can be to part ways with a high-profile team member. "These are folks with big egos," Peter Cappelli, a management professor at the University of Pennsylvania's Wharton School, told Business Insider. "Most of the time they end up in court." Saying goodbye to a prominent employee doesn't have to be dramatic. But don't assume a beefy severance package and a non-disparagement agreement are enough to leave a company unscathed. "If people want to hurt you, they'll find a way to do it," Cappelli said. "Ask divorced couples."

How to sever ties with a high-profile recruit

When pushing out a high-flyer, employers should frame the person's departure as business as usual, said Ronald Placone, a communications professor at Carnegie Mellon University's Tepper School of Business. "You try to normalize it," he said. "Things happen, people move on." Trump initially followed conventional wisdom in how he went about booting Musk from Washington last month. The president orchestrated a warm and fuzzy public send-off, thanking Musk for his service and providing a sensible explanation for his departure — in this case, that the billionaire was going back to focusing on his work at the multiple companies he helms. More common explanations are that the fired individual has decided to pursue other career opportunities, spend time with family, or engage in philanthropic endeavors. This tactic is aimed at protecting both the departee's reputation and that of the employer showing him or her the door. "They come up with a story," said Anna A. Tavis, chair of the human capital management department at New York University's School of Professional Studies. The goal is to avoid hurting the outgoing hotshot's chances of landing a new gig and the company's ability to find a replacement. "It's a question of, how do we save face?" she said.

Give people something else to talk about

Employers should also aim to draw people's attention elsewhere, Placone said. "One of Trump's strategies that often works is you just flood communication channels with other stuff, stuff you perceive is more favorable to your organization," he said. "You try to take some control by giving as many potential stories as possible so people don't home in on one." Trump did make some big announcements this week, including travel bans on several African countries, but leadership experts say the president also erred by openly rebuking Musk's harsh criticism of his signature tax bill on X. This kicked off the back-and-forth squabble that captured the world's attention on Thursday. "There's no need for that," Placone said. "In these high-profile situations, you want to say as little as possible. You don't want to add weight to the argument the other is putting forth." If Trump had instead kept quiet, Musk would have been more likely to stick with critiquing the bill rather than upping the ante by accusing the president of illicit behavior, he said. "It would've eventually fizzled out," Placone said.

Why some A-list hires don't last

Employers most commonly end up quickly sacking flashy new recruits because they aren't as talented as advertised or because they insist on working in a way that doesn't align with the company's culture, Tavis said. It even happens at the very top of the corporate ladder. For example, in recent years, the chief executives of Barnes & Noble, Starbucks, and CNN were pushed out of their jobs after brief tenures. "A lot of times they're overestimating their value," she said of people with a reputation for being above the fray, adding that due to the current tight labor market, notable departures are likely to increase. Sam Faycurry, CEO of artificial-intelligence and nutrition startup Fay in San Francisco, can relate. Last year, he hired a well-known rainmaker after a lengthy courtship, only to quickly conclude that the person wasn't a good fit. To avoid bad blood, Faycurry said he tried making it seem as if it was the individual's decision to leave by pointing out how much they disagreed on core principles. "This person ended up exiting themselves" without any hard feelings, Faycurry said, adding that he was relieved because his main concern was being able to refill the position with a better-aligned A-list professional. "If the person is influential in a talent pool you want to recruit people from in the future, there's no benefit to having a relationship fall out," Faycurry said. "You're never truly parting ways."

AI Can't Replace Education—Unless We Let It

Time Magazine, a day ago

As commencement ceremonies celebrate the promise of a new generation of graduates, one question looms: will AI make their education pointless? Many CEOs think so. They describe a future where AI will replace engineers, doctors, and teachers. Meta CEO Mark Zuckerberg recently predicted AI will replace mid-level engineers who write the company's computer code. NVIDIA's Jensen Huang has even declared coding itself obsolete. While Bill Gates admits the breakneck pace of AI development is 'profound and even a little bit scary,' he celebrates how it could make elite knowledge universally accessible. He, too, foresees a world where AI replaces coders, doctors, and teachers, offering free high-quality medical advice and tutoring.

Despite the hype, AI cannot 'think' for itself or act without humans—for now. Indeed, whether AI enhances learning or undermines understanding hinges on a crucial decision: Will we allow AI to just predict patterns? Or will we require it to explain, justify, and stay grounded in the laws of our world? AI needs human judgment, not just to supervise its output but also to embed scientific guardrails that give it direction, grounding, and interpretability.

Physicist Alan Sokal recently compared AI chatbots to a moderately good student taking an oral exam. 'When they know the answer, they'll tell it to you, and when they don't know the answer they're really good at bullsh*tting,' he said at an event at the University of Pennsylvania. So, unless a user knows a lot about a given subject, according to Sokal, one might not catch a 'bullsh*tting' chatbot. That, to me, perfectly captures AI's so-called 'knowledge.' It mimics understanding by predicting word sequences but lacks the conceptual grounding. That's why 'creative' AI systems struggle to distinguish real from fake, and debates have emerged about whether large language models truly grasp cultural nuance. When teachers worry that AI tutors may hinder students' critical thinking, or doctors fear algorithmic misdiagnosis, they identify the same flaw: machine learning is brilliant at pattern recognition, but lacks the deep knowledge born of systematic, cumulative human experience and the scientific method.

That is where a growing movement in AI offers a path forward. It focuses on embedding human knowledge directly into how machines learn. PINNs (Physics-Informed Neural Networks) and MINNs (Mechanistically Informed Neural Networks) are examples. The names might sound technical, but the idea is simple: AI gets better when it follows the rules, whether they are laws of physics, biological systems, or social dynamics. That means we still need humans not just to use knowledge, but to create it. AI works best when it learns from us.

I see this in my own work with MINNs. Instead of letting an algorithm guess what works based on past data, we program it to follow established scientific principles. Take a local family lavender farm in Indiana. For this kind of business, blooming time is everything. Harvesting too early or late reduces essential oil potency, hurting quality and profits. An AI may waste time combing through irrelevant patterns. However, a MINN starts with plant biology. It uses equations linking heat, light, frost, and water to blooming to make timely and financially meaningful predictions. But it only works when it knows how the physical, chemical, and biological world works. That knowledge comes from science, which humans develop.
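The essay describes this idea in prose; for readers who want to see the mechanics, the following is a minimal, hypothetical sketch in Python (using the PyTorch library) of a physics-informed loss. It is not the author's model: the governing law (simple exponential decay, du/dt = -k*u), the constant k, and the data are all invented for illustration. The network is trained to fit observations while also being penalized wherever its predictions violate the stated law.

    # Minimal sketch of a physics-informed loss (illustrative only).
    # The "physics" here is an assumed toy law: du/dt = -k*u.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Small network mapping time t -> predicted state u(t).
    net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))

    k = 1.5  # decay constant, assumed known from domain science

    # Synthetic noisy observations standing in for real measurements.
    t_data = torch.rand(20, 1)
    u_data = torch.exp(-k * t_data) + 0.01 * torch.randn(20, 1)

    # Collocation points where the physics is enforced, even without data.
    t_phys = torch.linspace(0.0, 1.0, 50).reshape(-1, 1).requires_grad_(True)

    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for step in range(2000):
        opt.zero_grad()
        # Data loss: fit the observations.
        loss_data = ((net(t_data) - u_data) ** 2).mean()
        # Physics loss: penalize violations of du/dt + k*u = 0.
        u = net(t_phys)
        du_dt = torch.autograd.grad(u.sum(), t_phys, create_graph=True)[0]
        loss_phys = ((du_dt + k * u) ** 2).mean()
        (loss_data + loss_phys).backward()
        opt.step()

The physics term is what separates this from a plain black-box fit: even at points with no measurements, predictions that contradict the governing equation are penalized, which is the sense in which such a model 'knows the rules' before it sees the data.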
Imagine applying this approach to cancer detection: breast tumors emit heat from increased blood flow and metabolism, and predictive AI could analyze thousands of thermal images to identify tumors based solely on data patterns. However, a MINN, like the one recently developed by researchers at the Rochester Institute of Technology, uses body-surface temperature data and embeds bioheat transfer laws directly into the model. That means, instead of guessing, it understands how heat moves through the body, allowing it to identify what's wrong, what's causing it, why, and precisely where it is by utilizing the physics of heat flow through tissue. In one case, a MINN predicted a tumor's location and size within a few millimeters, grounded entirely in how cancer disrupts the body's heat signature.

The takeaway is simple: humans are still essential. As AI becomes sophisticated, our role is not disappearing. It is shifting. Humans need to 'call bullsh*t' when an algorithm produces something bizarre, biased, or wrong. That isn't just a weakness of AI. It is humans' greatest strength. It means our knowledge also needs to grow so we can steer the technology, keep it in check, ensure it does what we think it does, and help people in the process.

The real threat isn't that AI is getting smarter. It is that we might stop using our intelligence. If we treat AI as an oracle, we risk forgetting how to question, reason, and recognize when something doesn't make sense. Fortunately, the future doesn't have to play out like this. We can build systems that are transparent, interpretable, and grounded in the accumulated human knowledge of science, ethics, and culture. Policymakers can fund research into interpretable AI. Universities can train students who blend domain knowledge with technical skills. Developers can adopt frameworks like MINNs and PINNs that require models to stay true to reality. And all of us—users, voters, citizens—can demand that AI serve science and objective truth, not just correlations.

After more than a decade of teaching university-level statistics and scientific modeling, I now focus on helping students understand how algorithms work 'under the hood' by learning the systems themselves, rather than using them by rote. The goal is to raise literacy across the interconnected languages of math, science, and coding. This approach is necessary today. We don't need more users clicking 'generate' on black-box models. We need people who can understand the AI's logic, its code and math, and catch its 'bullsh*t.'

AI will not make education irrelevant or replace humans. But we might replace ourselves if we forget how to think independently, and why science and deep understanding matter. The choice is not whether to reject or embrace AI. It's whether we'll stay educated and smart enough to guide it.
