Experts issue warning as dangerous wasp-like insects spread in US — here's what you need to know
The News Tribune reported that the "wasp-like insects," a species of pine sawfly, have been observed laying eggs in pine needles and feeding on pines throughout the county. While the bugs are more of a nuisance than a threat to nature, they can leave huge bare patches on trees, which can detract from the area's natural beauty, according to Todd Murray, an entomologist and director of Washington State University's research center.
Murray explained that the insects are native to Europe and were identified in Washington state in 2008. Luckily, adult sawflies are weak fliers, so their spread throughout the United States and Canada has been relatively slow as they rely on the movement of infested nursery stock, soil, and other materials, per the Government of Canada.
But if you have pine trees in your backyard, the bugs can make them look unsightly as they chow down on the needles.
Since the bugs have been spreading in a popular tourist area, they could cut into the county's recreation income and diminish visitors' experience of the park.
Although they're not known to cause significant damage to forest trees, sawflies may negatively impact ornamental nurseries and Christmas tree farms, as they spread easily through plants grown in these settings. Additionally, while pine sawfly larvae don't kill their host trees, continuous loss of pine needles can slow the trees' growth, according to the University of Maine.
When invasive species such as pine sawflies spread in a region, they disrupt the balance of forest ecosystems and consume resources that native plants and animals need to thrive. As sawflies strip trees of their needles, they leave the trees more susceptible to attack from other insects and diseases. In some cases, invasive bugs can have severe economic consequences, such as in Hawaiʻi, where coconut rhinoceros beetles and little fire ants have caused thousands of dollars in damage to palm and banana trees, taro, and other crops.
Murray told The News Tribune that the best way to get rid of sawflies if you spot them on plants is to either knock them off or use insecticides, although the former is better for your health and the environment. However, birds eat pine sawflies, so the easiest solution is probably to let nature do its thing, unless you have a large infestation that requires intervention.
If you want to do your part in protecting native species and conserving the planet, consider starting a native plant lawn that attracts pollinators and saves money on water and lawn care. Native lawns promote biodiversity and create an environment that supports natural predators of sawflies, thereby helping to manage their populations.
Related Articles


CNN
FDA's artificial intelligence is supposed to revolutionize drug approvals. It's making up nonexistent studies.
To hear health officials in the Trump administration talk, artificial intelligence has arrived in Washington to fast-track new life-saving drugs to market, streamline work at the vast, multibillion-dollar health agencies, and be a key assistant in the quest to slash wasteful government spending without jeopardizing their work.

'The AI revolution has arrived,' Health and Human Services Secretary Robert F. Kennedy Jr. has declared at congressional hearings in the past few months. 'We are using this technology already at HHS to manage health care data, perfectly securely, and to increase the speed of drug approvals,' he told the House Energy and Commerce Committee in June.

The enthusiasm — among some, at least — was palpable. Weeks earlier, the US Food and Drug Administration, the division of HHS that oversees vast portions of the American pharmaceutical and food system, had unveiled Elsa, an artificial intelligence tool intended to dramatically speed up drug and medical device approvals.

Yet behind the scenes, the agency's slick AI project has been greeted with a shrug — or outright alarm. Six current and former FDA officials who spoke on the condition of anonymity to discuss sensitive internal work told CNN that Elsa can be useful for generating meeting notes and summaries, or email and communique templates.

But it has also made up nonexistent studies, known as AI 'hallucinating,' or misrepresented research, according to three current FDA employees and documents seen by CNN. This makes it unreliable for their most critical work, the employees said.

'Anything that you don't have time to double-check is unreliable. It hallucinates confidently,' said one employee — a far cry from what has been publicly promised.

'AI is supposed to save our time, but I guarantee you that I waste a lot of extra time just due to the heightened vigilance that I have to have' to check for fake or misrepresented studies, a second FDA employee said.

Currently, Elsa cannot help with review work, the lengthy assessment agency scientists undertake to determine whether drugs and devices are safe and effective, two FDA staffers said. That's because it cannot access many relevant documents, like industry submissions, to answer basic questions such as how many times a company may have filed for FDA approval, their related products on the market or other company-specific information.

All this raises serious questions about the integrity of a tool that FDA Commissioner Dr. Marty Makary has boasted will transform the system for approving drugs and medical devices in the US, at a time when there is almost no federal oversight for assessing the use of AI in medicine.

'The agency is already using Elsa to accelerate clinical protocol reviews, shorten the time needed for scientific evaluations, and identify high-priority inspection targets,' the FDA said in a statement on its launch in June.

But speaking to CNN at the FDA's White Oak headquarters this week, Makary said that right now, most of the agency's scientists are using Elsa for its 'organization abilities' like finding studies and summarizing meetings.

The FDA's head of AI, Jeremy Walsh, admitted that Elsa can hallucinate nonexistent studies. 'Elsa is no different from lots of [large language models] and generative AI,' he told CNN. 'They could potentially hallucinate.'
Walsh also said Elsa's shortcomings in responding to questions about industry information should change soon, as the FDA updates the program in the coming weeks to let users upload documents to their own libraries.

Asked about mistakes Elsa is making, Makary noted that staff are not required to use the AI. 'I have not heard those specific concerns, but it's optional,' he said. 'They don't have to use Elsa if they don't find it to have value.'

Challenged on how this squares with the efficiency gains he has publicly touted when staff inside the FDA have told CNN they must double-check its work, he said: 'You have to determine what is reliable information that [you] can make major decisions based on, and I think we do a great job of that.'

The earliest iterations of Elsa were built from the backbone of an earlier AI model that the FDA had started to work on during the Biden administration, according to two sources familiar with the matter. The name was initially an acronym for Efficient Language System for Analysis and was among several pitches for names for the AI system, like 'RegulAItor.' Elsa eventually won out, though leadership ultimately decided against its longer title: A recent internal document seen by CNN says that now 'Elsa is just a name and is not an acronym.'

Walsh and his team demonstrated the AI tool for CNN this week. The platform has a plain white interface with some brown accents. It welcomes the user with 'How can I help you?' above an entry field that says 'Ask Elsa anything,' much like other popular publicly used AI.

The FDA has said that Elsa is designed to let regulators tap into secure internal documents, shortening reviews by quickly summarizing risky side effects and pulling in information about related products. During the demonstration, Elsa was asked to summarize the FDA's guidance on fatty liver disease and medicines that treat it. It pulled up the 10 papers from an internal FDA library that it said were the most relevant.

When it was adopted in June, Makary boasted that Elsa's rollout had come 'ahead of schedule and under budget' after 'a very successful pilot program with FDA's scientific reviewers.' Walsh said those efforts came together in a matter of weeks.

The agency leadership chose staff from across its various centers overseeing drugs, devices, food and animal medicines for a series of meetings in May. There, they gave feedback about what they needed from such a tool, potential challenges they saw and even some aesthetic choices, like Elsa's color palette and its name, according to an FDA employee who participated.

Those who participated in the feedback meetings were dubbed Elsa 'champions' and sent to evangelize the platform in their various corners of the agency, with talking points and suggestions about how to demonstrate its use, according to two current FDA staff.

Agency training on Elsa is voluntary, as is using the platform at all. Makary and Walsh told CNN that more than half of FDA staff have logged time in Elsa. But those who spoke to CNN said that adoption has been weak in their areas of the agency: not many of their colleagues are using Elsa, or they are using it only on a very limited basis.

Those who have used it say they have noticed serious problems. For example, it cannot reliably represent studies.
If Elsa gives a one-paragraph summary of, say, 20 pages of research tied to a particular new drug, there is no way to know whether it misrepresents something or misses something that a human reviewer would have considered important, one FDA employee said. There is no way for Elsa to know what information from a lengthy study could be the most crucial for an expert, this employee believes.

When Elsa is told it is incorrect — that a study it cites does not exist or that someone works at the FDA when they don't — it is usually 'apologetic,' one employee said. But in at least one instance shared with CNN — when that employee asked Elsa to generate something for a project — it insisted that research area was not in FDA's purview (it was).

Employees who spoke to CNN have tested Elsa's knowledge by asking it questions like how many drugs of a certain class are authorized for children to use or how many drugs are approved with a certain label. In both cases, it returned wrong answers. One employee described Elsa miscounting the number of products with a particular label. When told it was wrong, the AI admitted that it made a mistake. 'But it still doesn't help you to answer the question,' that employee said. The algorithm then reminds users that it is only an AI assistant and they need to verify its work.

Asked about errors, in addition to the hallucinations, Walsh said: 'Some of those responses don't surprise me at all. But what's important is … how we address those gaps in the capability' of Elsa and its users. Those include trainings and new features like the personal document libraries that will launch soon, he added.

Walsh also said that a current feature of Elsa, where users can click over its summaries to see which parts of a document Elsa has cited, can act as a check to make sure it did not fabricate a study. However, this now applies only when Elsa is being used to pull internal documents. As of now, it cannot link to, for example, articles in a medical journal. And knowing whether those sources are, in fact, the most important is also up to the user and how they ask the questions, Walsh said. He also contended that the problem of Elsa's hallucinations can be mitigated by asking it more precise questions. Elsa is also improving, he insists: 'We're also seeing as the AI models get better, right, feedback gets better.'

Talk of integrating artificial intelligence into US health agencies' work had been underway for some time before the second Trump administration jump-started efforts, but the speed with which Elsa came into use was unusual. Some experts trace the government's efforts to develop AI plans in earnest to 2018, when the Pentagon began evaluating the technology's potential for national security. Part of that project also looked into AI's use in health care, said Dr. Hassan Tetteh, a thoracic surgeon and former US Navy captain who worked on the project in 2020. There were also early efforts from that Pentagon-led group to talk with international allies about AI standards and regulations, he added.

In Europe, countries have worked together to stand up AI safeguards. In 2024, the European Union approved and implemented the AI Act, a law 'to protect fundamental rights, democracy, the rule of law' around risky AI use, including in health care, while promoting transformational AI models. These standards and protections do not exist in the US.
A government working group formed during the Biden administration to look at establishing regulations on AI use, including in health care, was disbanded last year: Its mandate expired and was not renewed.

Elsa arrived as Congress wrestled with how to approach laws on AI regulation. Although congressional committees have held hearings about AI risks like biased models and cybersecurity threats, Congress has passed no substantial legislation to regulate AI.

In June, a bipartisan group of House members introduced legislation mostly focused on maintaining US dominance in the AI race; later that month, two senators introduced a bill aimed at preventing American use of 'adversarial' AI from foreign governments, including China. Other efforts, such as a bill that would require testing and regulatory oversight for high-risk AI systems (much like the European standards), have stalled.

An earlier version of the 'One Big Beautiful Bill,' President Donald Trump's expansive tax and spending bill, would have included Congress' first sweeping law on AI: a 10-year moratorium on the enforcement of state regulations of the technology. But the Senate struck the provision down.

Trump, who has made AI development and investment a top priority in his second administration, has heralded a bright future for the technology. At an energy summit in Pennsylvania last week, he told attendees: 'We're here today because we believe that America's destiny is to dominate every industry and be the first in every technology, and that includes being the world's number one superpower in artificial intelligence.'

Without federal regulations, it is hard to say what that superpower would look like. 'AI does a lot of stuff, but it's not magic,' said Dr. Jonathan Chen, an assistant professor of medicine at Stanford University who has studied the use of AI in clinical settings. It would be great if it could help experts sniff out data falsification or give rigorous analysis on patient safety, but 'those problems are much more nuanced' than what a machine can do, he said. 'It's really kind of the Wild West right now. The technology moves so fast, it's hard to even comprehend exactly what it is.'


Yahoo
Male birth control pill passes early safety test, with more trials underway
An experimental, hormone-free male birth control pill has just passed its first safety test in humans.

The trial included 16 people and was only intended to test whether the drug reached adequate levels in the body, as well as whether it triggered any serious side effects, such as concerning changes in heart rate, hormone function, inflammation, mood or sexual function. Across the doses tested, no significant side effects were observed. That result tees up the pill to be tested in larger trials that will look at both safety and efficacy.

The results of the early safety trial, published Tuesday (July 22) in the journal Communications Medicine, are a critical first step toward getting the pill approved, Dr. Stephanie Page, an endocrinologist at the University of Washington School of Medicine who wasn't involved in the study, told Scientific American. "We really need more reversible contraceptive methods for men," she added.

For the moment, the only male birth control options are condoms and vasectomies. Vasectomies can be reversed, but the success rate of the reversal procedure, measured by how likely a person is to conceive a child afterward, varies widely. If approved, the new pill would be the first drug in its class.

"A safe and effective male pill will provide more options to couples for birth control," Gunda Georg, a professor in the University of Minnesota College of Pharmacy, where the drug molecule was developed, said in a statement released earlier this year following promising preclinical testing of the pill. Columbia University was also involved in the drug's development, along with the company YourChoice Therapeutics, which is overseeing the trials.

"It will allow a more equitable sharing of responsibility for family planning and provide reproductive autonomy for men," Georg said.

How the male birth control pill works

The experimental pill, called YCT-529, is designed to pause sperm production by interrupting specific signals in the body. Specifically, the drug works by blocking a protein called "retinoic acid receptor alpha," which is known to play a key role in the formation and maturation of sperm. In the testes, the receptor would usually be activated by the insertion of a "key" — a vitamin A metabolite — but the drug stops this key from clicking into place. That, in turn, prevents the chain reaction that ends with sperm being made.

Finding a compound with this effect required the scientists to closely examine the structure of the receptor when it's bound to its key, as well as test dozens of molecules to see which could block the interaction.

In preclinical tests with male lab mice, the drug "elicited profound effects" on sperm production. It triggered reversible infertility within four weeks of use, showing 99% effectiveness at preventing pregnancy in the female mice the treated males mated with. When taken off the drug, the male mice's fertility was restored within about four to six weeks. Further tests in nonhuman primates returned similar results, with sperm counts falling dramatically within two weeks of starting the drug and fully recovering within 10 to 15 weeks of stopping it.

These preclinical tests set the stage for the recent clinical trial in people. The trial included 16 males ages 32 to 59, all of whom had previously undergone vasectomies.
This was done out of an abundance of caution, in case the experimental drug had any lasting impacts on fertility, Nadja Mannowetz, co-founder and chief science officer of YourChoice Therapeutics, told Scientific American.

The participants were split into groups who received either placebo pills or a low or high dose of the drug. Most of the participants took the pill only after a period of fasting, but a subset were also given doses after a large meal, to see if that affected levels of the drug in the body.

Across all of the doses and conditions tested, the drug reached adequate levels in the body, but Mannowetz anticipates that, if approved, the final dose would be closest to the highest one tested: 180 milligrams.

Both the animal study and the human trial results suggest that the approved pill would likely be taken once a day, but further trials will confirm that dosing. And although no notable side effects emerged in this small trial, future trials with larger study cohorts will still need to monitor for these effects.

"The positive results from this first clinical trial laid the groundwork for a second trial, where men receive YCT-529 for 28 days and 90 days, to study safety and changes in sperm parameters," the study authors wrote in their paper. That trial is already underway and again is looking at males who have already had vasectomies or who have made a firm decision not to father children.

This article is for informational purposes only and is not meant to offer medical advice.