
AI meets healthcare: How a children's hospital is embracing innovation

Source: Feature Flash | Editor: relaxation | Time: 2025-07-02 22:53:16

While hospitals are accustomed to dealing with most things viral, they are already starting to study an entirely new kind of viral phenomenon: generative AI in the workplace.

Highly-ranked healthcare facilities like Boston Children’s Hospital, connected as they are to major research institutions, are some of the most prominent customer-facing operations in the healthcare industry.

And given that healthcare represents about 18 percent of the U.S. GDP, of course these organizations will want to take advantage of the latest technology that promises a revolution in productivity.



Boston Children’s Hospital, consistently ranked among the best children’s hospitals in the U.S., employs a “Chief Innovation Officer,” John Brownstein, an epidemiologist who runs a division called the Innovation & Digital Health Accelerator. Brownstein’s past work combining technology and health includes the creation of a site called “Flu Near You,” which was repurposed during the early days of the pandemic as “Covid Near You” for obvious reasons, according to New York Times Magazine. It still exists in a more general form as “Outbreaks Near Me.” It’s an unsettlingly useful website for tracking pathogens.  

And now Brownstein is turning his attention to AI.

First things first, according to Brownstein: there's no need to lay anyone off just because AI is invading healthcare. “This is not meant as a replacement for the human,” Brownstein told Mashable in an April interview. “This is an augmentation. So there's always a human in the loop.”


In April, as prompt engineering became a buzzworthy new tech job, Boston Children’s tipped its hand to the public about the fact that change was afoot when it posted a job ad seeking a prompt engineer of its own. In other words, the hospital was hiring a specialist to train AI language models that can improve hospital operations, and in theory, this person is supposed to improve conditions for hospital staff.

According to Brownstein, that’s because his department has a directive to reduce “provider burnout.” Boston Children’s has what he called “an internal team that builds tech.” Their job, he explained, is to locate places in “the world of work” where technology can play a role, but isn’t yet. They literally sit in “pain points” within Boston Children’s Hospital, and devise ways to, well, ease the pain.

What this means in practice is a bit mind-bending.

Easing the pain with AI 

One “pain point” in any hospital is directing patients from point A to point B, a tough exercise in communication that can include speed bumps like confusion due to illness or stress, or language barriers. “Already out of the gate, we can query ChatGPT with questions about how to navigate our hospital,” Brownstein said. “It's actually shocking, what these are producing without any amount of training from us.”  ChatGPT — and not some future version but the one you already have access to — can tell you how to get around “not just our hospital, but any hospital,” according to Brownstein.

So it’s more than realistic to imagine a machine kiosk where patients can receive useful answers to questions like, Brownstein offered, “Where can I pray?” And it’s probably also the hope of many healthcare workers that they don’t have to be stopped in their tracks with questions like that. Not everyone is a people person.


But Brownstein also has ideas for new ways providers can use patient data thanks to AI.

The idea that AI will be involved in the processing of actual patient data set off alarms for Mildred Cho, professor of pediatrics at Stanford’s Center for Biomedical Ethics. After reviewing the prompt engineer job ad, she told Mashable, “What strikes me about it is that the qualifications are focused on computer science and coding expertise and only ‘knowledge of healthcare research methodologies’ while the tasks include evaluating the performance of AI prompts.”

“To truly understand whether the outputs of large language models are valid to the high standards necessary for health care, an evaluator would need to have a much more nuanced and sophisticated knowledge base of medicine and also working knowledge of health care delivery systems and the limitations of their data,” Cho said. 


Cho further described a nightmare scenario: What if the prompt engineer helps retrain a language model, or tweak an automated process, based on faulty assumptions? What if they train racial bias, or other persistent errors, into it? Given that all data collected by people is inherently flawed, a shiny new process could be built on a foundation of errors.

“Our prompt engineer is not going to be working in a bubble,” Brownstein said. His team devotes time, he said, to worrying about “what it means to have imperfect data.” He was confident that the process wouldn’t be: “put a bunch of data in and, like, hope for the best.”

Using AI to customize discharge instructions

But lest we forget, “put in a bunch of data and hope for the best” is an apt description of how large language models work, and the results are often, well, awful. 

For an example where the data needs to be right-on-the-money, look no further than Brownstein’s absolutely fascinating vision for the discharge instructions of the future. You’ve probably received — and promptly thrown away — many discharge instructions.



Perhaps you got a bump on the head in a car accident. After getting checked out at the hospital and being cleared to go home, you likely received a few stapled pages of information about the signs of a concussion, how to use a cold compress, and how much ibuprofen to take. 

With an LLM trained on your individual patient information, Brownstein said, the system knows, among other things, where you live, so it can tell you where to buy your ibuprofen, or not to buy ibuprofen at all, because you’re allergic. But that’s just the tip of the iceberg.

“You're doing rehab, and you need to take a walk. It's telling you to do this walk around this particular area around your house. Or it could be contextually valuable, and it can modify based on your age and various attributes about you. And it can give that output in the voice that is the most compelling to make sure that you adhere to those instructions.”
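The kind of personalization Brownstein describes can be pictured as a prompt-assembly step: generic discharge instructions plus structured patient context, handed to an LLM for rewriting. The sketch below is purely illustrative; the field names (`age`, `address`, `allergies`) and the prompt template are assumptions, not Boston Children's actual system.

```python
# Hypothetical sketch of personalizing discharge instructions.
# The record fields and prompt wording are assumptions for illustration,
# not the hospital's real schema or pipeline.

def build_discharge_prompt(patient: dict, base_instructions: str) -> str:
    """Combine boilerplate discharge instructions with patient context
    so an LLM could tailor them (e.g. avoid allergy-triggering meds)."""
    context_lines = [
        f"Patient age: {patient['age']}",
        f"Home address: {patient['address']}",
        f"Known allergies: {', '.join(patient['allergies']) or 'none'}",
    ]
    return (
        "Rewrite the discharge instructions below for this patient. "
        "Do not recommend any medication they are allergic to.\n\n"
        + "\n".join(context_lines)
        + "\n\nInstructions:\n"
        + base_instructions
    )

record = {"age": 12, "address": "Boston, MA", "allergies": ["ibuprofen"]}
prompt = build_discharge_prompt(
    record, "Take ibuprofen for pain. Ice the bump twice daily."
)
```

The resulting `prompt` string would then be sent to the model; keeping the assembly step outside the model makes it easy to audit exactly which patient fields leave the record, which matters for the privacy concerns raised below.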

New tech historically has found its way into hospitals quickly 

David Himmelstein, a professor in the CUNY School of Public Health and a prominent critic of the U.S. for-profit healthcare system, said that while he had heard about potential uses of AI in hospitals that concerned him, this one didn’t strike him as “offensive.” He noted that discharge instructions are “almost boilerplate” anyway, and seemed unconcerned about the potential change.

However, he worries about what such systems could mean for privacy. “Who gets this information?” he wondered. “Sounds like it puts the information in the hands of Microsoft — or Google if they use their AI engine.” 

In widespread use, these are major concerns for hospitals moving forward, but Brownstein said that Boston Children’s Hospital, for its part, “is actually building internal LLMs,” meaning it won’t rely on companies like Google, Microsoft, or ChatGPT parent company OpenAI. “We actually have an environment we're building, so that we don't have to push patient data anywhere outside the walls of the hospital.” 

Himmelstein, however, pointed out that systems for automating hospitals are far from new, and have not created bureaucracy-free paradises, where work runs smoothly and efficiently, even though he noted that companies have been making such promises since the 1960s. He provided a fascinating historical document to illustrate this point: An IBM video from 1961 that promises electronic systems that will slash bureaucracy and “eliminate errors.”

But in the month since Mashable first spoke to Brownstein, the AI situation has progressed at Boston Children’s Hospital. In an email, Brownstein reported “a ton of progress” on large language models, and an “incredible” prompt engineer in the process of being onboarded.
