AI Driven Healthcare: The Inversion of Patient Privacy
April 22, 2026
Host: Hon. Sam Rohrer
Guest: Twila Brase
Note: This transcript is taken from a Stand in the Gap Today program aired on 4/22/26. To listen to the podcast, click HERE.
Disclaimer: While reasonable efforts have been made to provide an accurate transcription, the following is a representation of a mechanical transcription and, as such, may not be a word-for-word transcript. Please listen to the audio version for any questions concerning the following dialogue.
Sam Rohrer:
Hello and welcome to this Wednesday edition of Stand in the Gap Today. And it’s also our monthly focus on health freedom with Twila Brase, president and co-founder of Citizens’ Council for Health Freedom, with their website at cchfreedom.org. Now on previous programs, Twila and I have identified and discussed in detail here on Stand in the Gap Today the leading efforts and movements within the United States that have been attacking and undermining the core of patient health freedom, most of which revolves around the collection and the usage of private patient-doctor discussions and diagnoses and treatments, but done without patient knowledge and consent. Now, into this entire area of assumed privacy, government policy has stepped in, supposedly to protect patient privacy for you and me, making us think that all is quiet and confidential. But predictably, when government gets involved, it takes this private concept, this relationship of privacy and secrecy, and information of that type that should be confidential.
It turns it into a transactional, money- and profit-driven exploitation that is enabled, I’m going to say, through the presentation of lies and deception. In other words, when does government ever come to us and say, “We are now going to lie to you. We are now going to steal from you. We are now embarking on an entire program of subterfuge”? Right? That never happens. It never happens. But with the introduction of AI technology into what has been happening, this deception and theft of private health information, in my opinion, has been placed on steroids. And that general area will be our theme for today’s program. The title I’ve chosen to frame today’s program is this: AI-Driven Healthcare, the Inversion of Patient Privacy. Inversion meaning the complete turning upside down. Anyway, with that, Twila, welcome to the program. It’s always great to have you back.
Twila Brase:
Well, it’s always great to be here, Sam. Thanks so much.
Sam Rohrer:
You’re welcome. Well, we have a lot to talk about, and all of what we’re talking about today, you’ve been involved in and still are. But in preparation, let me just share a couple of things. For today’s program, since we’re talking a lot about AI, I used an AI approach to, for instance, help scour the internet. This was my question: find the most commonly accepted and unbiased identifications of those policies or efforts which threaten the historic doctor-patient relationship and the assumption of healthcare privacy. So I worded that carefully, and I didn’t know what I would find, but this is what it came back with. It identified eight examples. One of those was expanding mandatory reporting requirements. Second was strategic HIPAA data deception. Third was interoperable electronic health records. Fourth was information blocking and data sharing mandates.
The fifth was insurer and payer access to full clinical records. The sixth was repeated large-scale healthcare data breaches. Seventh was secondary use of clinical data for surveillance and analytics. And the eighth was regulatory emphasis on efficiency and data flow over individualized confidentiality as a primary ethical value. Now, Twila, as a matter of fact, I know you and I have highlighted all these things in the past to one degree or another, and your organization, Citizens’ Council for Health Freedom, has been working hard to bring these to light. So here’s my first question. Give a comment on the information I just presented, anything you’d want to make, and then from your perspective, what’s happening as a result of AI stepping into the doctor-patient relationship, and how is that further eroding health freedom?
Twila Brase:
Yeah. Well, you have created quite a list there, and it’s quite accurate. I think there’s so much happening in the healthcare space that patients don’t know about and don’t understand. And just listening to your list, I thought, well, okay, here are a few questions related to that list that could bring out the ethical issues. So going alongside your list, I would say: do patients know that their medical record data is the new gold, the new oil? I actually don’t think people understand that. Do they know that their data and their doctor’s decisions are all being reported to the government in various ways? Do they know that if a doctor sees just one Medicare patient, then according to the federal government, it can have access to the data of every patient in that doctor’s practice and require the doctor to send that information to the government as a way to check the quality of the doctor, which is a whole other discussion?
But anyway, I think a lot of people don’t know that HIPAA is a permissive data-sharing regulation that took away their rights, and I think they don’t know that it allows all of this cast and crew of outsiders to slice and dice the information and control their doctor. And maybe another question, given what you have said, is, do patients know that efficiency has become a watchword in the industry? They want your care to be delivered efficiently, but when you’re a patient, do you want efficiency or accuracy, efficiency or
Sam Rohrer:
Time
Twila Brase:
Spent listening to you? Do you want an efficient five-minute visit or a visit that gives both you and your doctor time to figure out what’s really going on and address the real issues? And I’ll add one more in that vein, because I don’t think that all the paperwork today is efficient, but that’s what doctors are being required to do. There was just a study recently of a practice; they looked at all the doctors and clinicians in the practice and discovered that on average, the clinicians were spending 11 minutes with their patients and 37 minutes with the paperwork. So three to four times more time is spent on the paperwork, the reporting, dotting the i’s, crossing the t’s, filling out these electronic screens, going into the computer, even after they’ve had dinner with their family. None of that is efficient. They’re all talking about efficiency, but they’re really not making it efficient.
They’re using the entire healthcare system for their own purposes of data collection.
Sam Rohrer:
Okay. And I’ve got a quick question. We’re about out of time here, but you have written about AI actually stepping into the doctor’s office, and you’ve used the word scribes. Talk about that just briefly. What do you mean?
Twila Brase:
Yeah. So scribes, everybody is pretty much aware of the medical scribes. Those are the people who walk in behind the doctor with the computer and start to type in every word that you’re saying. But the AI scribes are actually just a recording. So the doctor brings in his phone, the doctor brings in her computer, or, get this, they have microphones within the room itself. And some patients have sent me pictures of a sign in their exam room that says, “Ambient listening. If you stay in this room, you have agreed to have the entire session recorded.” So this is a third person in your exam room, and we don’t even know who that person is or where that data goes.
Sam Rohrer:
Okay. Now, ladies and gentlemen, stay with us, because we’re going to go further into that aspect of AI in the office of the doctor, or wherever, writing, collecting, analyzing all of that, which AI does, all for efficiency, of course. All right, we’ll talk about that, because there’s a downside. Believe me, there is. Stay with us. We’ll be back in just a moment. If you’re just joining us today, welcome aboard. This is our monthly focus here today on what we call health freedom. And my guest, as always, on this focus is Twila Brase. She is the president and the co-founder of Citizens’ Council for Health Freedom. They have a website at cchfreedom.org. And the theme today is this: AI-driven healthcare, the inversion, that turning upside down, of patient privacy. Now, Twila, going beyond that, I think one of the most dangerous strategies eroding health freedom, in my observation, is perhaps the substitution of truth with pragmatism, what works, and the moving from the patient’s best interest to somebody else’s best interest: money, efficiency as we’ve been talking about, or a transactional, one-size-fits-all measurement.
And AI, with its mandatory insertion, mandatory meaning government is pushing it hard, is coming into that space between the physician with his or her Hippocratic oath and the patient’s best interest. And I’m going to say that the view of the patient as an individual created in God’s image ties in closely with the Hippocratic oath, and turning that person into a sheer measurable digit, or an asset with no individual value, to me, those are two mutually exclusive positions. This effort to get between the patient and the physician, between the physician and the Hippocratic oath, and to reduce the individual with value, created in God’s image, into just something to be studied: they’re incompatible. They don’t go together. One’s going to prevail. And based on broadly collected research of concerned citizens, I think this is where they converge. One is this: they say medicine cannot function when patients believe they are being observed or recorded and evaluated by systems rather than by a trusted human professional.
They caution that without firm limits, transparency, and genuine patient choice, modern policies, including AI technologies, risk turning patients from moral agents into data subjects, thereby eroding health freedom, clinical judgment, and the ethical foundation of care. So I’ve alluded to that, but that’s what I found many, many physicians actually believe. So Twila, any comments on what I just expressed about AI efficiency and patients being reduced to observable digits of transactional research versus people with individual worth created in the image of God? Are you seeing that happen? And do you think that’s something that really goes to the heart of why we’re seeing health freedom so undermined?
Twila Brase:
Well, I think that Americans have to wake up to what’s happening in the healthcare system, but the problem is that most people aren’t in the healthcare system. They’re just going about their daily lives, and only when they get into the healthcare system do they start to see the problems with it. But what you’re really talking about there is the fact that the mission of medicine has been taken over by the business of healthcare, and this is the doing of government. Because if government hadn’t let this happen, if government had actually let us have real major medical insurance, if government and Congress hadn’t been pushing towards socialized medicine, even in the corporate version, right, we wouldn’t be here. But we are now at a point where those who are in the business of healthcare are exploiting the patients and the doctors for their own purposes, including to profit from the patients’ data.
And that kind of profit means not only slicing and dicing the data, selling analytics, but also controlling the doctors so not as much care is given, or limiting the number of patients that are even seen in a day because they have the doctors doing so much paperwork, they can’t be doing patient care. So bringing in the machines, whether that’s the recording device in the exam room through ambient listening, or the computer system that is populated by AI driven protocols, standardized one size fits all protocols, all of this puts the mission of medicine in jeopardy. And I would say that this movement that you are talking about is a violation of the mission of medicine. It’s a violation of humanity, and it completely violates the fact that it is … I’m speaking as a nurse too. It is an unchangeable reality that people cannot truly be cared for by machines.
They need people, they need personalized medical decisions; they have individualized DNA, individualized family situations. It needs to be personalized, and they need compassion, charity, listening ears, and human touch. There was a recent study about AI chatbots, which I cannot remember the details of, but I think this is what it showed: even though you send somebody to a chatbot, somebody who will talk back to them, people’s anxiety is not reduced like it is when they talk to a human being. I think that was the essence of the study, but you’ll have to look it up to see if I’m correct; it was something like this. People need humans, not machines.
Sam Rohrer:
And I think everybody listening to me, we know that to be true, either experientially or even intuitively. I mean, looking at a computer screen, which I am doing right now on the radio program, and you probably are too, is so much different than if I were talking to you right now, Twila, and looking in your eyes, or talking to my physician and looking in his or her eyes and being able to see body movement. There’s so much more communicated than just the words that come out of my mouth, and yet AI is hearing only words. And so that underlies what you’re saying and supports it. I think we intuitively know that. But on this matter of AI in the office, you’ve used the word scribes, and I want you to talk about what is actually being scribed, as an example. And then you’ve used the term ambient listening, where it’s actually listening and doing whatever.
Here’s my question. What are some of the consequences of this aspect of AI, beyond the listening itself, and how does that directly compete with and undermine health freedom, privacy, and more? Just kind of illustrate. Again, it’s obvious when we think about it, but make it more clear.
Twila Brase:
So once upon a time, if there was going to be anybody else in your exam room, you were asked, because this was a sanctuary, this was a safe space. This was a place where you could talk to the doctor and say whatever needed to be said in order to get the care that you needed. In the olden days, even if you were a criminal, you could go to the doctor in safety because it was a safe space, a sanctuary. But now, because of the electronic health record system that the government has imposed, which requires all of this data to be collected for the doctor or the hospital to even get paid, we have doctors and hospitals trying to figure out how they can have patients feel listened to, but still get all the data collected. So now scribes are coming into the room.
They didn’t ask you whether you could have a scribe in the room who’s listening to every word that you say to your doctor. And then they’re going further and they’re saying, “Well, we won’t bring a person in. We’ll bring a machine in that will record every word that you say. We’ll send it somewhere where it will be transcribed,” and it could even be India, for all we know, “sent back to the doctor, and the doctor will make decisions about whether or not those were accurate things that happened in the exam room at some time after the actual exam.” So there are several things here. One, it’s a major intrusion into the exam room, no matter what kind of scribe you have. Two, everybody needs to know that they can tell the doctor to send the scribe out of the room, and by that I mean the physical person who’s a medical scribe. Three, everybody should ask the clinic and ask the doctor before they even begin, “Is there a scribe recording what is happening in this exam room? Because if there is, I want you to shut off that machine.” I actually had a doctor come into the room with his phone and say, “I’m going to record this meeting.” I said, “No, you aren’t.” He said, “Do you mind?” And I said, “Yep.” And he said, “But I won’t get as much … it won’t be as comprehensive information.” And I just looked at him, I didn’t say anything, and he shut it off, right? Do you really want every word that you say to be recorded? What do you happen to say about your husband? What do you happen to say about your child? What do you happen to say about your life?
What do you just throw out there as a quip, and now it’s suddenly recorded in a permanent medical record? And because of HIPAA, the electronic health record systems, the state health information exchanges, and the national eHealth Exchange, suddenly anyone in the entire country who has access under HIPAA, which is many, many, many people, can see this information that you just absentmindedly said to your doctor, right? This is what people have to realize. Sometimes this will happen where they just come in like that doctor did. Sometimes it’ll be on a separate consent form. Sometimes it will be embedded in a multi-consent form, like, “I agree to treatment, I agree to be billed,” and there’ll be just one more on the line: “I agree to ambient listening.” Or, like somebody sent me a picture of a sign in their exam room, and your listeners are free to tell me however this happens because we’re gathering information, but this person sent me a sign in their exam room that said, “If you stay here, you’ve agreed to have your entire session recorded.” So we have to understand what’s going on here. This is the business taking out the humanity and bringing in the machines to keep it all for their own purposes and their profitability.
Sam Rohrer:
And with that, ladies and gentlemen, we’ve all been there, and I know when the pressure is there, like you’re there, you’re sick, you need to talk, and somebody says, “All right, it’s got to be recorded,” all these things we’re just talking about, and you say, “Oh man, I don’t really want that to happen, but okay.” All right, well then it happens and it’s forever recorded. And I guarantee you, as we’re talking, it’s AI-interpreted, it stays forever; it’s valuable information, and it’s very intrusive on one’s privacy. It’s dangerous, frankly. So in any regard, you get the idea. We’re talking about AI-driven healthcare, the inversion of patient privacy. We’ll be back; stay with us. All right, Twila, this information you’ve been sharing today is really, I think, significant, and a lot of it is probably new to people, although some, like you were just saying, have perhaps walked into doctors’ offices and seen a sign like that.
I just want to say again, if somebody is listening and they maybe have an example to share about the use of AI in the doctor’s office, or requirements to consent to things being recorded and so forth, you said they could submit it to you. How would they do that?
Twila Brase:
They could just go online, and we have the contact information there. I mean, the really simple way is just info@cchfreedom.org, but that is also at the bottom of the website, along with some of our feedback options. Just go in there and send it to us at info@cchfreedom.org.
Sam Rohrer:
Okay, great. And I encourage all of you listening to do that, because Twila can actually act upon that, and it does help to have actual examples. So that being said, Twila, in some of your material you’ve noted this. One thing I looked at, you said, “April 14th, 2026,” okay, just passed about a week ago, “marks 23 years since the HIPAA privacy rule went into effect.” And you put “privacy rule” in quotes. Most Americans, you said, believe HIPAA protects their medical privacy, but it’s actually a disclosure regulation that permits up to 2.2 million entities, including one and a half million business associates, to access patient data without consent. You went on to say HIPAA originally required consent; the industry lobbied to eliminate it and did so in 2001. HIPAA is the legal infrastructure that makes the ambient listening AI we were talking about, smart exam rooms, and the entire digital surveillance of medicine possible, including audio recordings, transcripts, and AI-generated notes, all of them flowing to third-party vendors under business associate agreements without patients ever knowing.
Now, that was lengthy, what I just shared, but I thought it was all helpful in setting up this next aspect. And that is, under the guise of providing privacy, HIPAA, until we began talking about it, you’ve been talking about it, and then we’ve shared it on this program, most people would have never had any idea that HIPAA was actually government-permitted access to private information, a violation rather than a protection. Here’s my question, and you’ve somewhat already answered it: was HIPAA ever, ever a true patient privacy protection? And that 2001 date, is that when it actually became just legalized access to private information?
Twila Brase:
So HIPAA was never a plan to protect your privacy or to require consent. As a matter of fact, when Congress passed the law, and the law, by the way, is embedded in a bigger law. So the actual law, HIPAA, is the Health Insurance Portability and Accountability Act of 1996. The P does not stand for privacy; it stands for portability of insurance. So just clarifying that little piece: within this larger bill about health insurance was a section called administrative simplification, and in it Congress said that our data could be digitized and that they would write a privacy law to protect us once it got digitized, and they had three years to do that. And of course they didn’t want to do that, because their plan was to take away your privacy. Because by mandating, or allowing, digitization of your data, putting all of your records in a computer accessible online through the internet and cyberspace, they were taking away your privacy.
They never wrote that law. They let the Secretary of Health and Human Services do it. And when she came out with her recommendations for what should happen, she wrote, “There’s an age-old right to privacy, but today the data should be available for publicly useful purposes.” So the plan of HIPAA, and the rule that came out of that little administrative simplification section of that bigger law, was never about protecting privacy, but about gaining access for whatever outsiders thought were publicly useful purposes, and now of course privately useful purposes, for profit and all sorts of things. And yes, the date that you’re talking about: the Clinton administration, after seeing 50,000 public comments come in saying, “Don’t take away our consent rights,” changed course and put back in consent for payment, consent for treatment, and consent for healthcare operations, with a huge definition, nearly 400 words long. They put consent requirements back in.
And then when Bush got in, the health plans came and said, “Oh no, we can’t have that. We need to not have consent for all those things that we want to use.” And so it was taken out. And so now we have HIPAA as it is today. The plan was never for privacy. It was always about taking away privacy. It was always about getting at all of our data; it just had different iterations until it got to where it is today, where there are no consent requirements in the rule.
Sam Rohrer:
So the only reason that we as patients may be presented a form, and that still happens, sign this HIPAA form for your protection, they want you to sign it, but it’s not for protection. But even if you sign it, or you don’t, you’re saying that under the law, the information collected in that office, or the procedures done, or whatever, is automatically, by the nature of HIPAA originally and the post-Clinton change under Bush, just a clear open conduit to third parties and to government. So there is no protection whatsoever. It’s an open green light for access. That’s what you’re saying?
Twila Brase:
Well, what I would say is that that is absolutely correct, unless you have a state that realizes this, and legislators say, “Hey, we’re going to give our people, and ourselves as legislators, back our privacy rights and write a state privacy law that brings back consent requirements.” In that case, those consent requirements will protect people against HIPAA. They will protect people from HIPAA if a real privacy law is written at the state level. The other thing I want to say is just a little correction of what you said in the beginning, about the 2.2 million entities who can have access to your information. Yes, they can have access, but there’s an if: if those who hold the data choose to share it. They cannot automatically have it. But there are 702,000 entities that do have access to your data, because they are the hospitals, they are the clinics, they are the labs, and all of those sorts.
Whoever has your data is not required to give it to anyone except the government and you. But there are 2.2 million entities that they could give it to if they choose, and they do not have to get your consent.
Sam Rohrer:
All right, but it’s not protection. The P in HIPAA is not privacy, nor was it ever. It’s portability, and that changes everything, doesn’t it? Let me ask you this question, because people are probably thinking, “Well, all right, if that data goes to government, or to the more than 700,000 private entities that could potentially have access to it, what is the value? Why is it so important to any one or more of those entities that they know what happens in the doctor’s office and have access to individual patient health records? What do they do with it that makes it so valuable?”
Twila Brase:
There are a lot of different places and reasons for value within it. One is just the data industry itself. UnitedHealth Group, for example, has a subsidiary called OptumInsight. All it does is slice and dice data, sell data, put out data, all of this sort of thing, right? In 2023, I believe, their revenue was $18.9 billion. So there’s the value of data all by itself. But then there’s the value of data to control the doctor, to have less money going out the door: to keep the doctor from doing the more expensive hip replacement, or to keep the doctor from giving the treatment that works and instead putting you, the patient, through step therapy, using ineffective after ineffective after ineffective medication until he can get up to the one he knows works, but he’s required to do this, right?
That’s data. There are also research projects, if they can get their hands on the data, or research agendas, or just policy agendas, where if they can get this data, they can slice and dice it the way they want. They can put out a paper that says X, that says, “This is good, this works.” I mean, pharma can do this for the medications, and then pick the people that they want, or pick the kind of research that they want, and then pick the data and put it out and say they had all of these records and this is what they found. There are just different ways that you can use the data for your own agendas, whether it’s a profiteering agenda, a research agenda, a rationing agenda; there are just so many.
Sam Rohrer:
Okay. And that makes sense. And, we’re just about out of time, but is this important: can they just use data without anybody’s names associated? Would that be one thing, or does the problem come when you actually put somebody’s name with the data, or both?
Twila Brase:
Oh, that’s such a great question. So I will say that you have to understand that the data is yours. You are not property. Your information is not somebody else’s property. And just because they strip off your name doesn’t mean that they get to use it for their own purposes, get to profit off of it. Why don’t they share the profits with you? Why don’t they ask you to sell your data to them, right? So it’s not just about name or no name.
Sam Rohrer:
Okay. And I think that’s a great point. I like the way you answered that, because ladies and gentlemen, it does come down to that. If it’s valuable, name or no name, why not share the benefits back, right? Okay, it makes sense. When we come back, we’re going to move to another area where health freedom is impacted, and that’s happening with babies and their DNA. That’s right, baby DNA. This is our monthly focus on health freedom. Twila Brase, the president and co-founder of Citizens’ Council for Health Freedom, has been my guest, as she always is on this emphasis. They have a website at cchfreedom.org. And the focus today has been on a major, I’m going to say it, enemy, put it that way, of health freedom: AI, artificial intelligence. We’ve had many discussions about that on many levels, but the theme today is AI-driven healthcare, because increasingly it is AI that’s driving healthcare.
The analysis, the diagnosis, the prognosis, the treatment, all of that. And we’re calling it today the inversion of patient privacy, because it literally is. When we go into our doctor and he or she closes the door, and we share what’s happening and how we’re feeling inside, or he begins to diagnose for treatment, there are many times that people go in and there are things that they frankly don’t want the world to know about. And they are thinking that that conversation is between them, whoever they choose to let know, a spouse perhaps, and that doctor, and that it does not go beyond that wall. It used to be a safe space, like Twila was saying just a little bit ago, but not anymore. And that really makes a major, major difference. And so I shared earlier how physicians themselves, who have been driven by their Hippocratic oath to do no harm, view their discussion with the patient as almost a sacred relationship.
And really it is. Now the AI and the governmental intervention are coming between them and their oath, between them and their patient, and it’s complicating everything that we have come to believe about privacy and healthcare. Now that being said, Twila, there’s another area of great concern which you have been speaking about, and I want you to comment on it, because it directly impacts health freedom. And that is the area of newborn DNA, baby DNA, and the collection of it. There are a lot of things that happen; any of us who have had children know there are many things that those in positions of medical authority want to jab into our babies right when they’re born, whether vaccines of some type or who knows what, and parents have to step in and say, “Wait a minute, wait a minute, wait a minute. No, no, no, no, no.”
Not now, or never perhaps. But this matter of collecting newborn DNA is becoming an issue. Share with our listeners about this practice: what it is, why it’s happening, and why it’s a concern.
Twila Brase:
Okay. So let’s just start with newborn screening. Newborn screening should have the word genetic in it, because it is the nation’s largest population-wide government genetic testing program, and people do not realize this. Many people, when they have a baby, don’t even know what happened. They do see that there’s a bandage on their baby’s heel, but the baby was taken away, so they never saw it happen. They didn’t hear the baby cry when they pricked the heel of the child, dripped the child’s blood onto special filter paper, and then sent that card with the filter paper to the state’s public health laboratory. So often this is just really outside of the parents’ frame of reference for the whole delivery and birth, because they are in a fog, they are exhausted, and they are excited. But this is actually genetic testing, and it’s been going on for a very, very long time.
States started keeping these cards, essentially keeping the DNA of every child. We got wind of this in 2003. We called for Governor Pawlenty here in Minnesota to get rid of the baby DNA warehouse of all of these cards. A reporter in Texas got wind of what we were doing here in Minnesota and found the same thing happening in Texas: 5.2 million cards of babies with their DNA were being kept by the government. And as a reminder, as adults, the government cannot take your DNA unless you’re a criminal or suspected of being a criminal. But all the governments are keeping all this DNA of citizens, newborn citizens, starting at birth. Anyway, so we’ve had a lot of impact on this. Now there are only about 16 states that keep the cards for 10 years or longer. Some of them, like Michigan and Minnesota, keep them indefinitely. There have been five lawsuits, and now what’s happening is the federal government, for the second time, is doing a research project on sequencing newborns at birth.
And this is called BEACONS NBS, the BEACONS newborn screening (NBS) program. They are providing $14.4 million to determine the feasibility of using newborn screening programs, the government’s newborn genetic testing programs, to sequence the DNA of babies. Sequencing means that they detail the entire genetic code and they put it in a record. Now, in this particular project, they’re saying that they’re only going to screen the children for 777 conditions that could happen before the child turns one. Right now in newborn screening, newborn genetic screening, there are somewhere between 30 and 80 newborn conditions that are tested for. This new program to look at sequencing babies at birth would look at 777 conditions. When I asked the proponents here in Minnesota at a half-day session on this, “Well, so you’re going to sequence for 777?” they said, “Well, we’re going to sequence the entire genome.
We’re going to know the complete building blocks of the child, but we’re going to shield everything about it except for the 777 conditions.” So they’re not going to look at the gene or the sequence which would show that they might get Alzheimer’s when they turn 70, or something like that, right? But people need to know that this is where the government is going. They want to sequence newborns at birth. They say that this is going to help with newborn conditions, it’s going to help science, it’s going to help prevent these kinds of conditions from happening. But this is a genetic code held by the government without the consent of the child. Now, this is currently an opt-in program for this $14.4 million grant, which is going to six states plus Puerto Rico. So they’re going to have seven areas where they’re going to test the feasibility of this and see: how do the parents respond to the request to do this to their child?
Are parents saying no? Will parents say yes? If they say no, why are they saying no? This is an entire research study on the feasibility of sequencing children at birth. That’s a very big concern. And the other concern that a lot of people don’t even think about is something that’s called collateral damage. There was a study done, I think in Boston back in 2010, which found that the more conditions you test a child for, the more often there will be a false positive. In other words, a finding comes out that something is wrong with the genetics of the child, or something is wrong with the metabolism of the child. They call up the parents and they say, “There’s been a problem with your child’s newborn screening. We’re going to send you to a doctor to fully screen your child again.” And what they found is that once the parents received that first message that something was wrong with their child, even though the second screening showed that nothing was wrong, the parents often clung to the first result and basically said the second one was probably incorrect.
So the parents distanced themselves from their child. They didn’t bond with their child. They changed their child’s eating behavior for years and years beyond when the condition could have popped up, things like that.

Sam Rohrer:
Okay. Twila, we’re out of time. You’ve laid out so many different consequences. Thank you so much for everything shared on the program today. So very helpful. Ladies and gentlemen, you can find this program again. I encourage you to go back and get it on our website, standinthegapradio.com. And you can access all of Twila’s information, and resources relating to consent, on her website at CCHfreedom.org.

