EP 10: Responsible AI: Guidelines for Business Success

Navigate the World of Responsible AI Guidelines and Regulations

In this episode of Tech UNMUTED, George and Santi delve into the world of responsible AI, exploring the principles behind building a responsible AI framework.

From privacy and transparency to continuous learning and improvement, they discuss the essential elements that ensure AI is used ethically and effectively. They emphasize the importance of accountability, reliability, and safety in AI systems. As they navigate the complexities of intellectual property and fairness, they shed light on the evolving landscape of AI guidelines and regulations.

Join George and Santi as they unravel the intricate world of responsible AI, guiding listeners toward a future where innovation and ethics go hand in hand.

Watch & Listen

Tech UNMUTED is on YouTube
Catch up with new episodes or hear from our archive. Explore and subscribe!


Transcript for this Episode:

INTRODUCTION VOICEOVER: This is Tech UNMUTED. The podcast of modern collaboration – where we tell the stories of how collaboration tools enable businesses to be more efficient and connected. With your hosts, George Schoenstein and Santi Cuellar. Welcome to Tech UNMUTED.

GEORGE: Welcome to the latest episode of Tech UNMUTED. Today, we're going to talk about building a responsible AI framework.

SANTI: And responsible AI can mean different things to different people. We need to-- It might be helpful to just list out some of the things that you want to look at as, let's call them, principles, I guess. Let's just talk about some principles of responsible AI, but I'd be interested to see what this list looks like. It is going to become, I think, a standard practice for companies to actually have a legal entity, a legal document that specifically outlines responsible AI use. I can see that happening real soon.

GEORGE: So, you see a lot of words on the screen, a bunch of different categories: privacy, transparency, accessibility, et cetera. We're going to walk you through a couple of slides with some definitions on these. There are 10 of these on the page that we've identified and that we're actually using as a framework internally to start to build out our own AI approach, policies, et cetera. These will be a good guide for folks to start to figure out what they want to do from an AI standpoint.

SANTI: Yes, I can't wait to peel back on these individually. I've got to tell you, there's a difference between putting together, I guess, a guide or guidelines for responsible AI and actually putting that guideline into practice. I think those are the two major hurdles that companies are going to face. One is, what should this guide look like, this responsible AI guide? And then two, how do we make it so that we can put it into practice? It's not just some document that sits somewhere that nobody pays mind to.

I'd be interested to see in the near future how companies start to adopt guidelines because I think that's going to be a big one but yes, let's peel this back. I like this, I like where we're going here.

GEORGE: I agree, it's a great point. This needs to be a living document. A lot of times, you create policies and procedures in organizations that stay in place for a really long period of time. This is changing so rapidly, you've got to be willing to change with it. This is the first page, so we're going to do five a page, but the key thing, and it's the subtitle on both of these pages, which is our perspective, is you want to foster innovation while managing risk. There still continues to be a tremendous risk in the marketplace of groups of people who are trying to shut down AI initiatives or limit AI initiatives.

We think that just really damages the potential for innovation if that's what happens, or it puts the power within a few companies who fall under some government regulation or something else. Let's start with the first one, and we'll both chime in and comment. Think of these in a couple of ways. Some of these might be foundational; they might be an overall approach that underpins your entire set of AI guidelines. Others are elements of those guidelines, and we'll leave it to everybody to figure out where it fits for them.

We do have a final slide where you'll see some visuals. We're not going to walk through any of them, but it just shows the wide diversity of how people have approached framing this out for their own organizations. The first one on this is privacy, pretty straightforward. People have rights from a privacy standpoint, and it almost ties with the second one, which is transparency. You need to understand what data and what information on you is being used, both as input and how it's being taken out of the AI systems, and you've got to be transparent about what that is.

SANTI: 100%, and they do tie to each other. You're right. Privacy and transparency are related; even though they're separate items, they're intertwined. The transparency piece really stands out for me because not only are you supposed to be transparent with how the AI produced the outcome, but the AI has to be able to be transparent in its own explanation as to how it came up with the outcome. Transparency is key, and it has to be based on trust. You have to trust that what you're presenting as transparency is truly transparency, and privacy, of course.

Nobody wants their data being misused. That's important across anything that has to do with the internet, period. That's good.

GEORGE: At a minimum, it's understanding where it's used, who has access to it, and how it's used. Think of the regulations in the US around healthcare and how your information is used there. We have regulations in California that are different around some of the privacy stuff. We certainly have much stricter privacy regulations in Europe than we see in the US in most cases, so that's important. The next one on the list is continuous learning and improvement. The piece I look at most with this is that the system needs to continue to improve.

It needs to understand where it's made mistakes, where there might be a bias in its response, and there needs to be some kind of input and some kind of oversight. Again, some of these tie together. There needs to be some level of human control as well, some check and balance on what's happening.

SANTI: Yes. AI systems have different modes of learning. There are some modes of learning where the AI does it on its own, and there are some modes where it's assisted learning; that's also where part of the human control comes in. We are feeding or teaching the AI certain things, and then there are other things. You have different models: you have machine learning, you have large language models, which are conversational in nature, and the more you feed them, the better they get.

However, at the end of the day, I agree, the AI can learn all it wants, but we humans need to control not just the learning aspect, but the AI as a whole. Ultimately, we should be held accountable for what comes out of the AI. I think that's one of the principles, and it comes out of the human control aspect too.

GEORGE: We've talked about this on previous podcasts. There is a strong human element, and to a very great extent, if not completely, in today's world these are tools. They're not a complete job; generally, they're an element of a job, a tool for a knowledge worker. There will be other AI automations in restaurants and other settings, some of which are in place already. When we think about it from a knowledge worker's standpoint, this is a tool that needs to be managed. There needs to be human oversight, and there needs to be some logical assessment of, "Did I get to the outcome I expected to get to in that individual instance?"

Then we'll see some other ones on the next page that tie back to this a little bit to give some guide rails on safety and other things. Let's go to the final one on this list, which is intellectual property, and I see two sides to this. There's the element of what was ingested in the first place into the AI tool to create, effectively, its data set, and then who owns what comes out of it. There are some elements where certain things can't be patented or copyrighted if they've come from AI-generated sources. Thoughts on that element of it?

SANTI: I really think that this particular element, the intellectual property piece, is still being explored. I think it's going to continue to evolve and change, and I think that's going to keep happening for a while because there's so much complexity to things like copyright, trademarks, and who owns it. If I create a document using AI, like ChatGPT, can you consider that my document? There are so many dimensions to the intellectual property piece. They all have different dimensions and they all have evolutions and stages they're going to go through, but this one in particular, I think, is going to be a complicated one.

I'm curious to see over the next couple of years how it evolves and where we land. Yes, I think it's too early to put your finger on it just yet because there are too many moving parts, but it's going to be very dynamic, I think, for the next couple of years on that particular point.

GEORGE: This goes back to the earlier comment we made. This is an evolving area at the moment. Over time, you're going to have to-- There'll be fewer changes two years from now than there will be two months from now. It'll settle down a little bit. There's also this broader framework of, and we had a separate discussion around regulations a couple of weeks ago.

Organizations really need to tend toward a common set of guidelines, or at least elements of guidelines, to get ahead of regulation and make sure they're doing the right things and building the right framework. I realize there are some government initiatives in place where folks are being pulled in, but what concerns me there the most is that some of those businesses are self-serving. They create a-

SANTI: Of course, they are.

GEORGE: -level of fear in people that eventually benefits them and closes them off and gives them access or different access than somebody else.

SANTI: That can never happen, George.

GEORGE: Well, hopefully not, but we know it could. This is the second set of five, so we'll walk through them in a similar way. Fairness--

SANTI: Oh, yes.

GEORGE: Some elements tie back to what we covered in the previous slide, but it's around avoiding bias, really making sure that the systems are set up in a way that they don't produce an outcome that is somehow skewed by the input that set them up. It could be by industry or region, there could be a gender element; there are all kinds of things in there that could go wrong and get narrowly focused. There's also the ability, and we've seen this with ChatGPT, for people to get ChatGPT to do things that might not have been initially expected and would be viewed as a negative outcome, because they have a path, but--

SANTI: On the fairness piece, you would think it'd be straightforward, but if you look at something like ChatGPT or Midjourney in its early stages, if you asked it to, let's say, make me a picture or an image of something, a lot of times it would default to a specific gender or a specific race. Some of us caught that early on and said, "Hey, why are we getting four or five images that are very specific in gender and race?" It's gotten much better. I see now that when you give instructions, it takes fairness into account and you're getting outputs that are more diversified. It's getting better.

Fairness is important. You know why fairness is important? Because if you want true adoption of AI, it has to be fair. Why would I use a tool that doesn't align with me, or that I can't connect or relate to? That's the bottom line; it's like anything else. So yes, fairness to me is more important than anything for adoption. It's getting better. They'll get there. I think this is one that they can tackle pretty quickly.

GEORGE: These next couple are equally important, in some cases maybe even more important. This next one is maybe the most important, which is accountability. Who is accountable for the outcome? What if you create a fully virtualized AI-based financial advisor, and it takes my retirement account and invests it in some cryptocurrency, and I lose all my money in three days? Who's accountable for that outcome? Have they met the appropriate fiduciary requirements that an investment advisor would have? There are multiple other iterations of this. Do you allow a medical diagnosis to be 100% made through AI, and then who's accountable for that outcome?

SANTI: If we allow an out, like a hold-harmless approach to AI, I think that's going to be terrible, because this is such a powerful technology and there's so much intelligence behind the responses and the outputs you're getting that there has to be accountability. You can't have an out for something like this. I really think that, because if you do, then folks are just going to hide behind that: oh, you signed a hold harmless, so the fact that your bank account is now wiped out is not my problem. That ain't going to work in this scenario.

I think we absolutely have to hold the developers and the human factor accountable for the output, for sure. The next one, reliability, really is near and dear to my heart, as you know. That's--

GEORGE: That sort of ties to accountability, right?

SANTI: It does.

GEORGE: We've seen, in the bots we've developed using the Microsoft 365 platform, there are choices we make in it and we can choose-- There are three levels of filtering, effectively, from 100% based on what it's grounded in, grounding meaning the set of data that it's looking at, through the third level, where it has the ability to make things up. Even though it's grounded in a really narrow set, we've seen it start to make things up. That was a choice we made in testing to test different things, but you, as the human element of this, have that choice. You need to be accountable for that outcome. You made the choice, you set it loose on the world.

SANTI: For sure. As we know, AI hallucination is a real thing. It really is. It really does happen. If it's not reliable, it will make stuff up that's not true. To me, that's a big one because it has to be reliable. Everything in technology has to be reliable or else it doesn't make sense to use. That's a good principle.

GEORGE: The next one is safety. It ties back to some of the comments we made already. Do you allow an AI-based tool 100% to make a medical diagnosis, and does that create a level of risk? I think clearly the answer is no at this point. Maybe 20, 30, 40 years from now, it will have so much more data that it would be better than a human at doing that, but today, that's not the case. Think about multiple layers: I have an MRI done, the MRI is initially read by an AI-based agent, and that then goes to a doctor.

The doctor then also reviews the MRI and comes up with a diagnosis based on data, but this goes back to the approach, your guidelines, your policies, your procedures. My preferred method, and I'm not a doctor, but my preferred method would be: independently, the doctor reviews my MRI, the AI reviews the MRI, then the doctor reviews the AI-based read of the MRI, not the reverse, because if they've read the AI-based review first, that may change the way they look at it. I would prefer two independent views of it, and then the human is the final decision maker in that case.

SANTI: In that scenario, you're talking about using AI as a validation tool versus an automation tool. In other words, I do the work, but then I have the AI double-check and see if there's something I missed.

GEORGE: Correct.

SANTI: I think that's phenomenal. That's where I think the specific applications are going to differ and change in the use of AI. I agree with that. I mean, the doctors should be doing what the doctors do best, but using the AI to double check, and give you some validation, in that context, that makes perfect sense to me. That's a safety net, so that would be a safety principle that you would put into your guidelines.

GEORGE: The final piece is around accessibility. We've alluded to some of this already. If a small group of people or companies or whatever controls the development of AI and the use of AI, that is not going to get us to the level of innovation that we want to be at. Other countries, and we mentioned this again in a previous podcast around regulations, if the US tightly controls it and other countries do not, as an example, those other countries have the opportunity to advance more quickly. They will innovate more quickly. They will have different levels of access, right?

Understandably, there need to be some boundaries on certain levels of access and on what things are used for. You can still get nefarious outcomes and bad actors--

SANTI: That's always going to happen. I think accessibility is also about the culture. We go back to fairness. If, as a company, you're going to roll out AI, you've got to make those tools available to everybody. You can't just say AI is going to be used by this department or by this set of folks, because honestly, it's not fair. I think it should be absolutely accessible. That's from a company standpoint. From a social standpoint, these new technologies should always be available to folks no matter what class you're in. I think the future of AI is that this is a tool and a technology that's going to be for the masses, not for the few, 100%.

GEORGE: We've seen that you need to be careful about how you approach things. Do you want to release all the tools on everybody at the same time? The answer's probably no. The approach we've taken is you and I in particular started testing some things, and we've added other folks into the fold. As we're starting to develop the guidelines, we have an understanding of what's out there and where there are potential risks. We'll build the guidelines around that framework of an understanding of the tools.

We're not sitting on these tools for a year or two to try to figure this out. We have done this in real time. You ran, effectively, classes with our team to--

SANTI: We got more training coming. Absolutely.

GEORGE: Let me flip to this final slide. We're not going to individually cover anything on this. This is a screenshot from Bing. I went to Bing and searched for AI framework or something like that, then went to images to see what the images were. When we were planning out this podcast, we had thought that we needed to provide a visual framework, and we debated, if we did, what would be foundational. From an image standpoint, maybe the foundational stuff is on the bottom of the image and the other things are building blocks on top of it.

We had quite a debate about what belonged in each category, and then decided we're not at a place to present a framework, a visual framework at least. We understand those 10 elements we just went through. You can see from what's on the screen, and I realize some folks are on the audio-only portion of this, but there are, I'm guessing, 15 of them on the screen, something like that. They're all different things. They're circles, squares, and diagrams with arrows.

SANTI: Not only that. We called out 10 individual potential principles for building a responsible AI guideline. Some of these have just four of them; some of them have five. In other words, there is no standard. I think that's the message here: everybody's looking at responsible AI a little bit differently, but at the very core of each one of these, as you go across the screen, there's at least the principle of fairness, accountability, reliability, and privacy that you can see. No matter which one you look at, they have some sense of that.

There is no set standard. I think there will be at some point; it's like anything else. Somebody's going to come up with that right balance, and everybody's going to adopt it as the official one. There might even be a certification or something that a company goes through to show that they're being responsible. That's going to happen at some point. There is no set framework. I think [crosstalk]

GEORGE: Clearly, there are drivers by industry. Healthcare needs to get ahead of this. Healthcare needs to get a group of healthcare providers together and come up with a framework that addresses the 10 things we identified, and probably many more things or subcategories of those, that everybody agrees to, so there's a level of transparency. I probably read it once, a long time ago, some HIPAA privacy thing that I signed. I'd wager most folks aren't necessarily going to read all of it.

If there is a simple, straightforward framework for patients to understand what AI-driven data is coming back to them, what advice was based on AI, how much of their data is being used for other things, all those kinds of things, at least having a high-level understanding of that would be helpful. That's a case where a simple, straightforward framework that's patient-facing probably makes a lot of sense. Maybe the framework that the companies use to develop it is more complicated.

SANTI: Listen, this is revolutionary. This is new. We are watching these things unfold before our eyes. Let's face it. Well, this brings our episode to an end. As always, please remember to subscribe to Tech UNMUTED on your favorite podcast platform. You can also follow us on YouTube. Until next time, remember this, stay connected. Take care.

CLOSING VOICEOVER: Visit www.fusionconnect.com/techunmuted for show notes and more episodes. Thanks for listening.


Episode Credits:

Produced by: Fusion Connect

2023 TMCnet Best Tech Podcast award winner

Tech UNMUTED, the podcast of modern collaboration, where we tell the stories of how collaboration tools enable businesses to be more efficient and connected. Humans have collaborated since the beginning of time – we’re wired to work together to solve complex problems, brainstorm novel solutions and build a connected community. On Tech UNMUTED, we’ll cover the latest industry trends and dive into real-world examples of how technology is inspiring businesses and communities to be more efficient and connected. Tune in to learn how today's table-stakes technologies are fostering a collaborative culture, serving as the anchor for exceptional customer service.

Get show notes, transcripts, and other details at www.fusionconnect.com/techUNMUTED. Tech UNMUTED is a production of Fusion Connect, LLC.