{"id":4088925,"date":"2025-11-12T18:10:58","date_gmt":"2025-11-12T23:10:58","guid":{"rendered":"https:\/\/www.computerworld.com\/video\/4088925\/legal-risks-laws-and-how-to-stay-compliant-with-ai-automated-decision-making-tools.html"},"modified":"2025-11-12T18:10:58","modified_gmt":"2025-11-12T23:10:58","slug":"legal-risks-laws-and-how-to-stay-compliant-with-ai-automated-decision-making-tools","status":"publish","type":"video_episodes","link":"https:\/\/www.computerworld.com\/video\/4088925\/legal-risks-laws-and-how-to-stay-compliant-with-ai-automated-decision-making-tools.html","title":{"rendered":"Legal risks, laws and how to stay compliant with AI & automated decision-making tools"},"content":{"rendered":"<div id=\"remove_no_follow\">\n\n\n\n\n\n<div class=\"transcript\" id=\"transcript\" role=\"region\" aria-label=\"Transcript\" data-transcript-blocks=\"22\">\n            <h2 class=\"transcript__title\" id=\"transcript\">Transcript<\/h2>\n            <div class=\"transcript__text\">\n                <div class=\"transcript__list\" role=\"list\"><div class=\"transcript__item\" role=\"listitem\"><div class=\"transcript__item-meta\">\n                    <time>00:00<\/time> \n                    <button class=\"transcript__jump-button\" data-time=\"0\" aria-label=\"Jump to 00:00 in video\">\n                        <i class=\"icon-play\"><svg><use xlink:href=\"#icon-play\"><\/use><\/svg><\/i>\n                    <\/button>\n                <\/div><div class=\"transcript__content\" data-paragraphs=\"2\" data-readmore=\"Show more\" data-readless=\"Show less\"><p class=\"transcript__paragraph\">Keith Shaw: As companies look toward adopting more AI and automated decision-making tools to become more efficient, a minefield of legal issues and complications is rising. 
AI agents on the horizon further muddy the waters when it comes to risk, liability, and accountability.<\/p><p class=\"transcript__paragraph\">On this episode of Today in Tech, we&rsquo;re going to explore the many legal issues companies may face with the rise of agentic AI. Hi, everybody.<\/p><\/div><\/div><div class=\"transcript__item\" role=\"listitem\"><div class=\"transcript__item-meta\">\n                    <time>00:30<\/time> \n                    <button class=\"transcript__jump-button\" data-time=\"30\" aria-label=\"Jump to 00:30 in video\">\n                        <i class=\"icon-play\"><svg><use xlink:href=\"#icon-play\"><\/use><\/svg><\/i>\n                    <\/button>\n                <\/div><div class=\"transcript__content\" data-paragraphs=\"2\" data-readmore=\"Show more\" data-readless=\"Show less\"><p class=\"transcript__paragraph\">Welcome to Today in Tech. I&rsquo;m Keith Shaw. Joining me on the show today is Rob Taylor. He is an attorney with Carstens, Allen &amp; Gourley, and he does extensive work in the area of automated decision-making. Welcome to the show, Rob. Rob Taylor: Thanks for having me, Keith.<\/p><p class=\"transcript__paragraph\">Happy to be here. Keith: Let&rsquo;s jump right in. What mistakes are you seeing companies make right now in their deployment of automated decision-making tools? 
And even if they&rsquo;re technically not using AI, there are a lot of tools out there making decisions, right?<\/p><\/div><\/div><div class=\"transcript__item\" role=\"listitem\"><div class=\"transcript__item-meta\">\n                    <time>01:04<\/time> \n                    <button class=\"transcript__jump-button\" data-time=\"64\" aria-label=\"Jump to 01:04 in video\">\n                        <i class=\"icon-play\"><svg><use xlink:href=\"#icon-play\"><\/use><\/svg><\/i>\n                    <\/button>\n                <\/div><div class=\"transcript__content\" data-paragraphs=\"5\" data-readmore=\"Show more\" data-readless=\"Show less\"><p class=\"transcript__paragraph\">Rob: One big mistake I see is that many companies are singularly focused on AI. They focus on compliance with AI laws and regulations without considering that what they&rsquo;re rolling out may actually be classified as ADM &mdash;&nbsp; automated decision-making.<\/p><p class=\"transcript__paragraph\">That&rsquo;s a whole other area of law, regulations, and compliance obligations. ADM isn't just about artificial intelligence; it's much broader. Anytime you roll out a solution that makes automated decisions affecting individuals or their livelihood, you may fall under ADM regulations.<\/p><p class=\"transcript__paragraph\">Another common myth is that companies assume if they aren&rsquo;t making the final decision, ADM doesn&rsquo;t apply. That&rsquo;s not necessarily true. There's a global patchwork of ADM laws &mdash;&nbsp; some cover only final decisions, while others cover interim decisions.<\/p><p class=\"transcript__paragraph\">And even if the law doesn&rsquo;t apply to interim decisions, companies may still face liability under other laws, as we've seen in recent litigation. Keith: When did we start seeing this wave of ADM laws? Is this mostly recent? 
And are these on the federal level, state level, or international?<\/p><p class=\"transcript__paragraph\">Why were these regulations initiated?<\/p><\/div><\/div><div class=\"transcript__item\" role=\"listitem\"><div class=\"transcript__item-meta\">\n                    <time>03:16<\/time> \n                    <button class=\"transcript__jump-button\" data-time=\"196\" aria-label=\"Jump to 03:16 in video\">\n                        <i class=\"icon-play\"><svg><use xlink:href=\"#icon-play\"><\/use><\/svg><\/i>\n                    <\/button>\n                <\/div><div class=\"transcript__content\" data-paragraphs=\"2\" data-readmore=\"Show more\" data-readless=\"Show less\"><p class=\"transcript__paragraph\">Rob: It mostly comes down to individual rights. The intent behind these laws is that individuals shouldn&rsquo;t be forced to have consequential decisions made solely by a tool &mdash;&nbsp; they have a right to human involvement. That&rsquo;s the common theme worldwide.<\/p><p class=\"transcript__paragraph\">In some jurisdictions, the developer or deployer of the ADM system must offer individuals an opt-out option so they aren&rsquo;t forced through automated decision-making.<\/p><\/div><\/div><div class=\"transcript__item\" role=\"listitem\"><div class=\"transcript__item-meta\">\n                    <time>04:06<\/time> \n                    <button class=\"transcript__jump-button\" data-time=\"246\" aria-label=\"Jump to 04:06 in video\">\n                        <i class=\"icon-play\"><svg><use xlink:href=\"#icon-play\"><\/use><\/svg><\/i>\n                    <\/button>\n                <\/div><div class=\"transcript__content\" data-paragraphs=\"2\" data-readmore=\"Show more\" data-readless=\"Show less\"><p class=\"transcript__paragraph\">Keith: So is this a newer development &mdash;&nbsp; within the last five years &mdash;&nbsp; or has this been happening for decades? Rob: It&rsquo;s more recent. 
As AI solutions become more agentic and impact individuals directly, companies are increasingly falling within the scope of ADM laws.<\/p><p class=\"transcript__paragraph\">Yet many companies aren't even aware ADM applies to them &mdash;&nbsp; they think they&rsquo;re just releasing &ldquo;an AI solution,&rdquo; not realizing it triggers ADM rules as well.<\/p><\/div><\/div><div class=\"transcript__item\" role=\"listitem\"><div class=\"transcript__item-meta\">\n                    <time>05:00<\/time> \n                    <button class=\"transcript__jump-button\" data-time=\"300\" aria-label=\"Jump to 05:00 in video\">\n                        <i class=\"icon-play\"><svg><use xlink:href=\"#icon-play\"><\/use><\/svg><\/i>\n                    <\/button>\n                <\/div><div class=\"transcript__content\" data-paragraphs=\"3\" data-readmore=\"Show more\" data-readless=\"Show less\"><p class=\"transcript__paragraph\">Keith: Can you give some real-world examples? What industries are using these tools? 
Rob: Sure.<\/p><p class=\"transcript__paragraph\">Major areas include credit scoring and creditworthiness decisions; insurance underwriting and claims decisions; hiring and talent acquisition (resume screening, ranking, automated interviews); and employee assessments (skills tests, behavioral evaluations, cultural fit screening). These are high-risk scenarios because they affect an individual&rsquo;s livelihood.<\/p><p class=\"transcript__paragraph\">We&rsquo;ve already seen litigation in hiring before AI, and it's increasing now with AI systems.<\/p><\/div><\/div><div class=\"transcript__item\" role=\"listitem\"><div class=\"transcript__item-meta\">\n                    <time>07:45<\/time> \n                    <button class=\"transcript__jump-button\" data-time=\"465\" aria-label=\"Jump to 07:45 in video\">\n                        <i class=\"icon-play\"><svg><use xlink:href=\"#icon-play\"><\/use><\/svg><\/i>\n                    <\/button>\n                <\/div><div class=\"transcript__content\" data-paragraphs=\"2\" data-readmore=\"Show more\" data-readless=\"Show less\"><p class=\"transcript__paragraph\">Keith: So if we were doing a job interview and an automated interview tool analyzed my facial expressions or body language, it could impact whether I advance in the process? Rob: Exactly. 
Even if facial expression analysis isn&rsquo;t the final determinant, it may influence the decision.<\/p><p class=\"transcript__paragraph\">ADM rules are designed to address this &mdash;&nbsp; life-altering decisions shouldn't be made by a tool without disclosure and consent.<\/p><\/div><\/div><div class=\"transcript__item\" role=\"listitem\"><div class=\"transcript__item-meta\">\n                    <time>10:00<\/time> \n                    <button class=\"transcript__jump-button\" data-time=\"600\" aria-label=\"Jump to 10:00 in video\">\n                        <i class=\"icon-play\"><svg><use xlink:href=\"#icon-play\"><\/use><\/svg><\/i>\n                    <\/button>\n                <\/div><div class=\"transcript__content\" data-paragraphs=\"2\" data-readmore=\"Show more\" data-readless=\"Show less\"><p class=\"transcript__paragraph\">Keith: So the key is transparency &nbsp;&mdash;&nbsp; &nbsp;informing users the technology is being used? Rob: Yes. Disclosure and consent are low-cost, highly effective protections. Even if the law doesn&rsquo;t explicitly require it, it's a best practice. And companies must consider not only interactive use, but also internal AI use.<\/p><p class=\"transcript__paragraph\">For example, Patagonia was sued because a third-party AI tool analyzed customer service calls without notifying customers. 
Even internal use requires clarity if the AI processes consumer information.<\/p><\/div><\/div><div class=\"transcript__item\" role=\"listitem\"><div class=\"transcript__item-meta\">\n                    <time>13:14<\/time> \n                    <button class=\"transcript__jump-button\" data-time=\"794\" aria-label=\"Jump to 13:14 in video\">\n                        <i class=\"icon-play\"><svg><use xlink:href=\"#icon-play\"><\/use><\/svg><\/i>\n                    <\/button>\n                <\/div><div class=\"transcript__content\" data-paragraphs=\"2\" data-readmore=\"Show more\" data-readless=\"Show less\"><p class=\"transcript__paragraph\">Keith: Will we start seeing more upfront disclosures? Almost like a &ldquo;This call uses AI&rdquo; message? Rob: I think so. And why hide it? Transparency prevents litigation. But transparency raises another issue: explainability.<\/p><p class=\"transcript__paragraph\">Some ADM laws &mdash;&nbsp; like new California regulations &mdash;&nbsp; require employers to retain data used in hiring decisions for four years so individuals can challenge decisions. This conflicts with typical data-minimization practices, so companies must understand both AI and ADM frameworks.<\/p><\/div><\/div><div class=\"transcript__item\" role=\"listitem\"><div class=\"transcript__item-meta\">\n                    <time>17:16<\/time> \n                    <button class=\"transcript__jump-button\" data-time=\"1036\" aria-label=\"Jump to 17:16 in video\">\n                        <i class=\"icon-play\"><svg><use xlink:href=\"#icon-play\"><\/use><\/svg><\/i>\n                    <\/button>\n                <\/div><div class=\"transcript__content\" data-paragraphs=\"1\"><p class=\"transcript__paragraph\">Rob: Even if ADM laws don&rsquo;t explicitly require data retention, companies may need evidence to prove decisions weren't discriminatory. So retaining data becomes essential. 
Keith: And hiring is where we&rsquo;ve seen major bias issues &nbsp;&mdash;&nbsp; &nbsp;resumes filtered by gender, race, age, university, etc. What should companies be watching for?<\/p><\/div><\/div><div class=\"transcript__item\" role=\"listitem\"><div class=\"transcript__item-meta\">\n                    <time>18:49<\/time> \n                    <button class=\"transcript__jump-button\" data-time=\"1129\" aria-label=\"Jump to 18:49 in video\">\n                        <i class=\"icon-play\"><svg><use xlink:href=\"#icon-play\"><\/use><\/svg><\/i>\n                    <\/button>\n                <\/div><div class=\"transcript__content\" data-paragraphs=\"1\"><p class=\"transcript__paragraph\">Rob: Bias can arise unintentionally, especially when models learn from historical hiring data. Even if companies exclude protected attributes, AI can infer correlations &nbsp;&mdash;&nbsp; &nbsp;like universities attended &nbsp;&mdash;&nbsp; &nbsp;that disproportionately impact certain groups. Most bias is unintentional, but still actionable.<\/p><\/div><\/div><div class=\"transcript__item\" role=\"listitem\"><div class=\"transcript__item-meta\">\n                    <time>22:26<\/time> \n                    <button class=\"transcript__jump-button\" data-time=\"1346\" aria-label=\"Jump to 22:26 in video\">\n                        <i class=\"icon-play\"><svg><use xlink:href=\"#icon-play\"><\/use><\/svg><\/i>\n                    <\/button>\n                <\/div><div class=\"transcript__content\" data-paragraphs=\"1\"><p class=\"transcript__paragraph\">Companies need to proactively identify weak points in their system and test for bias.<\/p><\/div><\/div><div class=\"transcript__item\" role=\"listitem\"><div class=\"transcript__item-meta\">\n                    <time>24:09<\/time> \n                    <button class=\"transcript__jump-button\" data-time=\"1449\" aria-label=\"Jump to 24:09 in video\">\n                        <i class=\"icon-play\"><svg><use 
xlink:href=\"#icon-play\"><\/use><\/svg><\/i>\n                    <\/button>\n                <\/div><div class=\"transcript__content\" data-paragraphs=\"2\" data-readmore=\"Show more\" data-readless=\"Show less\"><p class=\"transcript__paragraph\">Keith: But with thousands of resumes, humans can't review them all either. So both humans and AI introduce bias. Rob: True, but AI creates scale. If AI filters 1,000 resumes and ranks only 20, recruiters start there and may never see qualified candidates outside that group.<\/p><p class=\"transcript__paragraph\">We've seen this in lawsuits &mdash;&nbsp; such as those targeting Workday, where AI allegedly screened out applicants without human review.<\/p><\/div><\/div><div class=\"transcript__item\" role=\"listitem\"><div class=\"transcript__item-meta\">\n                    <time>27:14<\/time> \n                    <button class=\"transcript__jump-button\" data-time=\"1634\" aria-label=\"Jump to 27:14 in video\">\n                        <i class=\"icon-play\"><svg><use xlink:href=\"#icon-play\"><\/use><\/svg><\/i>\n                    <\/button>\n                <\/div><div class=\"transcript__content\" data-paragraphs=\"2\" data-readmore=\"Show more\" data-readless=\"Show less\"><p class=\"transcript__paragraph\">Keith: Interestingly, they're suing the developer rather than employers. Do you expect more of that? Rob: Yes. Courts may hold developers liable when models are inherently biased. 
Developers are the ones who understand the technology, and plaintiffs often target deep-pocket defendants.<\/p><p class=\"transcript__paragraph\">We&rsquo;ll likely see shared responsibility across developers and deploying companies going forward.<\/p><\/div><\/div><div class=\"transcript__item\" role=\"listitem\"><div class=\"transcript__item-meta\">\n                    <time>30:00<\/time> \n                    <button class=\"transcript__jump-button\" data-time=\"1800\" aria-label=\"Jump to 30:00 in video\">\n                        <i class=\"icon-play\"><svg><use xlink:href=\"#icon-play\"><\/use><\/svg><\/i>\n                    <\/button>\n                <\/div><div class=\"transcript__content\" data-paragraphs=\"2\" data-readmore=\"Show more\" data-readless=\"Show less\"><p class=\"transcript__paragraph\">Keith: Do we need more laws, or do existing laws already cover most situations? Rob: That&rsquo;s one of the biggest myths &nbsp;&mdash;&nbsp; &nbsp;that without AI-specific laws, it's the Wild West. 
Existing laws like employment discrimination and consumer protection often apply.<\/p><p class=\"transcript__paragraph\">We don&rsquo;t need laws to specifically say, &ldquo;You cannot discriminate with AI.&rdquo; Discrimination is already illegal, regardless of whether a human or AI does it.<\/p><\/div><\/div><div class=\"transcript__item\" role=\"listitem\"><div class=\"transcript__item-meta\">\n                    <time>32:41<\/time> \n                    <button class=\"transcript__jump-button\" data-time=\"1961\" aria-label=\"Jump to 32:41 in video\">\n                        <i class=\"icon-play\"><svg><use xlink:href=\"#icon-play\"><\/use><\/svg><\/i>\n                    <\/button>\n                <\/div><div class=\"transcript__content\" data-paragraphs=\"1\"><p class=\"transcript__paragraph\">Product liability principles are also emerging in AI cases, and the law will evolve as needed.<\/p><\/div><\/div><div class=\"transcript__item\" role=\"listitem\"><div class=\"transcript__item-meta\">\n                    <time>33:40<\/time> \n                    <button class=\"transcript__jump-button\" data-time=\"2020\" aria-label=\"Jump to 33:40 in video\">\n                        <i class=\"icon-play\"><svg><use xlink:href=\"#icon-play\"><\/use><\/svg><\/i>\n                    <\/button>\n                <\/div><div class=\"transcript__content\" data-paragraphs=\"1\"><p class=\"transcript__paragraph\">Keith: Where will we see ADM litigation next? 
Rob: Anywhere automated systems make consequential decisions about individuals: hiring, credit, insurance, healthcare, and college admissions.<\/p><\/div><\/div><div class=\"transcript__item\" role=\"listitem\"><div class=\"transcript__item-meta\">\n                    <time>35:29<\/time> \n                    <button class=\"transcript__jump-button\" data-time=\"2129\" aria-label=\"Jump to 35:29 in video\">\n                        <i class=\"icon-play\"><svg><use xlink:href=\"#icon-play\"><\/use><\/svg><\/i>\n                    <\/button>\n                <\/div><div class=\"transcript__content\" data-paragraphs=\"2\" data-readmore=\"Show more\" data-readless=\"Show less\"><p class=\"transcript__paragraph\">Keith: So what should companies do? Who do you advise &mdash;&nbsp; legal teams, developers, tech teams? Rob: All of them. To evaluate risks, you need to understand engineering, data flows, decision logic, and human oversight. Many companies form AI governance committees, but they lack AI expertise.<\/p><p class=\"transcript__paragraph\">That&rsquo;s a mistake &mdash;&nbsp; someone with AI knowledge must be involved, or they should bring in experts.<\/p><\/div><\/div><div class=\"transcript__item\" role=\"listitem\"><div class=\"transcript__item-meta\">\n                    <time>39:36<\/time> \n                    <button class=\"transcript__jump-button\" data-time=\"2376\" aria-label=\"Jump to 39:36 in video\">\n                        <i class=\"icon-play\"><svg><use xlink:href=\"#icon-play\"><\/use><\/svg><\/i>\n                    <\/button>\n                <\/div><div class=\"transcript__content\" data-paragraphs=\"1\"><p class=\"transcript__paragraph\">Poor understanding leads to failed AI deployments. We&rsquo;ve seen reports that ~95% of AI deployments fail or don&rsquo;t deliver ROI, often due to lack of expertise. 
Companies are now increasingly hiring AI consultants and experts instead of relying solely on internal teams.<\/p><\/div><\/div><div class=\"transcript__item\" role=\"listitem\"><div class=\"transcript__item-meta\">\n                    <time>41:07<\/time> \n                    <button class=\"transcript__jump-button\" data-time=\"2467\" aria-label=\"Jump to 41:07 in video\">\n                        <i class=\"icon-play\"><svg><use xlink:href=\"#icon-play\"><\/use><\/svg><\/i>\n                    <\/button>\n                <\/div><div class=\"transcript__content\" data-paragraphs=\"1\"><p class=\"transcript__paragraph\">Keith: So do things get better or worse from here? Rob: Litigation teaches lessons. Smart companies learn from others&rsquo; mistakes and avoid easy-to-prevent lawsuits &nbsp;&mdash;&nbsp; &nbsp;like failing to notify customers about AI use. But many still don't know what they don't know.<\/p><\/div><\/div><div class=\"transcript__item\" role=\"listitem\"><div class=\"transcript__item-meta\">\n                    <time>42:43<\/time> \n                    <button class=\"transcript__jump-button\" data-time=\"2563\" aria-label=\"Jump to 42:43 in video\">\n                        <i class=\"icon-play\"><svg><use xlink:href=\"#icon-play\"><\/use><\/svg><\/i>\n                    <\/button>\n                <\/div><div class=\"transcript__content\" data-paragraphs=\"1\"><p class=\"transcript__paragraph\">It's an exciting time. Technology and laws are rapidly evolving. 
Liability frameworks are developing, and companies need to stay ahead.<\/p><\/div><\/div><div class=\"transcript__item\" role=\"listitem\"><div class=\"transcript__item-meta\">\n                    <time>43:30<\/time> \n                    <button class=\"transcript__jump-button\" data-time=\"2610\" aria-label=\"Jump to 43:30 in video\">\n                        <i class=\"icon-play\"><svg><use xlink:href=\"#icon-play\"><\/use><\/svg><\/i>\n                    <\/button>\n                <\/div><div class=\"transcript__content\" data-paragraphs=\"2\" data-readmore=\"Show more\" data-readless=\"Show less\"><p class=\"transcript__paragraph\">Keith: Rob, thanks again for joining us and walking through these legal liability issues. Rob: Thank you for having me. Great discussion. Keith: That&rsquo;ll do it for this week's show. Be sure to like the video, subscribe, and comment below.<\/p><p class=\"transcript__paragraph\">Join us every week for new episodes of Today in Tech. I'm Keith Shaw &nbsp;&mdash;&nbsp; &nbsp;thanks for watching.<\/p><\/div><\/div><\/div>\n            <\/div>\n        
<\/div>\n\n<\/div>\n","protected":false},"author":1701,"featured_media":100068731,"template":"","meta":{"__idg_published_ids":[],"__idg_published_status":"draft","embargo_date":"","multi_title":"[]","old_id_in_onecms":"","_idg_updated_flag":false,"_idg_updated_date":"","hreflang_xdefault":0,"content_type":"","suppress_html_meta":"{}","byline":"","featured_video_id":0,"supress_floating_video":false,"prevent_index":0,"has_duration":0,"teaser_paragraphs":"","is_translated_post":0,"idg_original_post_id":0,"idg_translated_post_ids":[],"idg_original_post_publication":"","idg_original_post_language":"","idg_original_post_brand":"","right_panel_heading":"","video_type":"youtube","podcast_episode_id":0,"yt_video_id":"tyYjRgEZA4k","yt_video_duration":"44.52","jw_video_id":"","jw_video_duration":"","jw_video_title":"","episode_ordering":350,"is_hosts_enabled":false,"is_guests_enabled":false,"suppress_monetization":"{}"},"categories":[1885,2888,2275,2542],"tags":[],"languages":[21],"editions":[12],"publication":[9,10],"sponsorships":[],"video_series":[6607],"coauthors":[],"class_list":{"0":"post-4088925","1":"video_episodes","2":"type-video_episodes","3":"status-publish","4":"has-post-thumbnail","6":"category-artificial-intelligence","7":"category-generative-ai","8":"category-regulation","9":"category-risk-management","10":"languages-en","11":"editions-global","12":"publication-computerworld","13":"publication-us-default","14":"video_series-today-in-tech"},"eyebrow":{"eyebrow":"Video","eyebrow_style":"default","eyebrow_feed_title":"Video","eyebrow_feed_style":"default"},"_links":{"self":[{"href":"https:\/\/www.computerworld.com\/wp-json\/wp\/v2\/video_episodes\/4088925","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.computerworld.com\/wp-json\/wp\/v2\/video_episodes"}],"about":[{"href":"https:\/\/www.computerworld.com\/wp-json\/wp\/v2\/types\/video_episodes"}],"author":[{"embeddable":true,"href":"https:\/\/www.computerworld.com\/wp-json\/wp\/v2\/users\/1
701"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.computerworld.com\/wp-json\/wp\/v2\/media\/100068731"}],"wp:attachment":[{"href":"https:\/\/www.computerworld.com\/wp-json\/wp\/v2\/media?parent=4088925"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.computerworld.com\/wp-json\/wp\/v2\/categories?post=4088925"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.computerworld.com\/wp-json\/wp\/v2\/tags?post=4088925"},{"taxonomy":"languages","embeddable":true,"href":"https:\/\/www.computerworld.com\/wp-json\/wp\/v2\/languages?post=4088925"},{"taxonomy":"editions","embeddable":true,"href":"https:\/\/www.computerworld.com\/wp-json\/wp\/v2\/editions?post=4088925"},{"taxonomy":"publication","embeddable":true,"href":"https:\/\/www.computerworld.com\/wp-json\/wp\/v2\/publication?post=4088925"},{"taxonomy":"sponsorships","embeddable":true,"href":"https:\/\/www.computerworld.com\/wp-json\/wp\/v2\/sponsorships?post=4088925"},{"taxonomy":"video_series","embeddable":true,"href":"https:\/\/www.computerworld.com\/wp-json\/wp\/v2\/video_series?post=4088925"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/www.computerworld.com\/wp-json\/wp\/v2\/coauthors?post=4088925"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}