What: What can we expect from AI and Chatbots in the next few years? A Newswise Live Event
When: Wednesday, March 15, 2023, 1 PM to 2 PM EDT
Who: Expert Panelists include:
- Sercan Ozcan, Reader (Associate Professor) in Innovation & Technology Management at the University of Portsmouth
- Jim Samuel, Associate Professor of Practice and Executive Director, Master of Public Informatics at the Bloustein School, Rutgers-New Brunswick
- Alan Dennis, Professor of Information Systems and the John T. Chambers Chair of Internet Systems in the Kelley School of Business at IU Bloomington
Details: News about artificial intelligence has escalated considerably in the last few months with the roll-out of Microsoft's Bing chatbot and the popularity of large language models (LLMs) such as ChatGPT. Popular social media app Snapchat has launched its own chatbot, "My AI," using the latest version of ChatGPT. Newswise Live is hosting a live expert panel on what to expect from AI in the near future, its impact on journalism, and the corporate race for AI dominance (Google vs. Microsoft, etc.). Panelists will discuss what we can expect from AI and chatbots in the next three years.
Notable quotes from the panelists:
Jim Samuel: I'm expecting, in the years ahead, a lot of jobs to be redefined and a lot of jobs to be replaced by AI as companies realize that instead of a team of 40, they could just use generative AIs plus a team of maybe 10 specialists. And you would get the same level of output, or probably more, than if you were to use only 40 humans.
Alan Dennis: The thing we really should be worried about with AI is profound bullshit, stuff that is seemingly good - and if we really don't pay attention, yeah, we'll believe it. That's the danger in AI, is that it's just going to bullshit its way into everything. And in case you're wondering, bullshit is a technical term. We use it when we talk about analytics models that have gone wrong. So I'm using it in a very narrow technical sense.
Sercan Ozcan: There are many articles that are generated with the help of ChatGPT. When you read some of these things - and I have come across some of them - the knowledge seems very realistic, but sometimes it is not. And what we call this is a hallucination of ChatGPT - it gives you information that looks very realistic, but it's not.
Alan Dennis: Deep fakes and other tools like them are going to change everything, particularly for journalism, because we've created digital puppets of several different celebrities and I can make them say anything that I want them to say. And it's really not that hard to do.
Jim Samuel: We cannot and we should not expect AIs to run by themselves. That will lead to chaos.
Alan Dennis: I think everybody will have an AI personal assistant that looks and sounds like a digital human. Like Siri today, but there'll be a face and a voice on it. Whenever we do a Zoom call, if I'm a manager, I'm going to bring along my AI assistant and just drop the assistant on the Zoom call. And the AI is going to manage all of these low-level tasks.
Alan Dennis: Let me suggest that companies should think about AI today as an inexperienced teenager. So if you would let an inexperienced teenager give you medical advice, go for it. If you wouldn't, you might want to think twice about putting an unsupervised, inexperienced teenager into your decision-making.
FULL TRANSCRIPT
Thom:
Hello and welcome to today's Newswise Live event. We have an expert panel to discuss artificial intelligence. ChatGPT and image generators like Midjourney are making a lot of news as new versions of these tools come out and the public, as well as the media and business people, figure out how to integrate these tools into our lives and into our work.
Here to help suss that out and figure out what to expect from these new technologies, we have experts ready to answer any questions that you have. Dr. Sercan Ozcan is a reader (associate professor) in innovation and technology management at the University of Portsmouth. Sercan, could you please describe where you think we are in this new industrial and technological revolution? Obviously there are waves of disruption it may cause in the short term, but what can we look to in the long term for how these tools will change our lives, the way we work, and other aspects of daily life?
Dr. Sercan Ozcan:
Well, thank you, Thomas, for having me. I think we had a lot of progress in AI technologies before now, but with ChatGPT especially, the field has gained a lot of popularity. With that popularity, I think there will be an acceleration in the development of AI. Looking forward, there will be a lot of ChatGPT-related applications, and they will diffuse into many sectors. So we will be seeing a rapid progression of ChatGPT-type solutions, because AI technologies like ChatGPT are general purpose technologies, and general purpose technologies - such as the internet and electricity - diffuse into different sectors, continuously develop, and fundamentally change the way we live. Before the internet, we were not talking about digitalization and the digital world. There may be a lot of positives and some negatives, but I think in the next decade or so, we will be adjusting the way we do business and live according to our engagement with AI.
Thom:
I'd like to bring into the discussion here Jim Samuel. He's the Executive Director of Informatics Programs at Rutgers' New Brunswick campus. Jim, thanks a lot for joining. Tell us, similar to what Sercan was saying, how do you feel people are generally accepting these advances in technology? What do you think is behind the hype and novelty right now? And what do you see coming as that fades - what will the real implications be in the coming weeks, months, and years?
Jim Samuel:
Yeah, I think we are in a very interesting phase. We're experiencing a kind of romanticism towards artificial intelligence technologies. For those of us who have been working with NLP over the past decade, large language models and chatbots have been around for quite some time. But in November of 2022, something changed: OpenAI released ChatGPT, which was very user-friendly - and all of a sudden, the entire world woke up to this whole space of generative AIs and what generative AIs can do.
So I think what we are seeing now is - I'm not sure if I want to call it hype, but there is a lot of excitement as people have discovered what AI can do.
A couple of thoughts there. Number one, yes, AIs are very powerful, and they have significant utility value. So, for example, I'm expecting, in the years ahead, a lot of jobs to be redefined and a lot of jobs to be replaced by AI as companies realize that instead of a team of 40, they could just use generative AIs plus a team of maybe 10 specialists. And you would get the same level of output, or probably more, than if you were to use only 40 humans. So those kinds of changes are to be expected. On the other hand, what is also happening is people are recognizing the limitations and the risks of using artificial intelligence. There's been a lot of caution. Some companies like Google, which was at the forefront of developing these technologies, exercised more caution than other companies that rushed to release their products and monetize them as quickly as possible. But people are realizing that though the output is interesting, ChatGPT, for example, may contain bias and inaccurate information. There have been cases where sources were requested and it simply manufactured the sources. And that is to be expected once you understand the nature of large language models and the underlying foundation models that go into producing these applications.
So to summarize, I think the future is bright with artificial intelligence. Artificial intelligence is not going away. We should all get ready to accommodate and work alongside artificial intelligence. At the same time, we have to be cautious in terms of the role we foresee for artificial intelligence. The main thing that I want to emphasize at this point is that AIs do not have a sense of meaning. They do not possess intrinsic meaning the way humans do. They give the impression - they create an illusion - that they are speaking meaningfully, but all the meaning, the power of meaning, resides with human intellect. And as long as we keep that in mind, I think we will be on a safe path to making productive use of artificial intelligence.
Thom:
Welcome to the conversation now, Alan Dennis. He's a Professor of Information Systems and the John T. Chambers Chair of Internet Systems at the Kelley School of Business at Indiana University in Bloomington. Alan, thanks for joining. As Jim has brought the conversation around to a bit of optimism, do you feel AI tools are at the point where they can be incorporated into work and life in ways that are useful? Or are there still some advances needed before many of those things become realistic? What's your view of where we stand now?
Alan Dennis:
Oh, I'm going to say I agree with Jim. I think we're at the dawn of a new era. Today, we look back at 1993 as the start of the internet age; it divided our history into two parts. I think 50 years from now, historians will look back at 2022 and say that was the dawn of the AI age. And we'll divide the world into the AI age and the pre-AI age.
Now, it took us two, three, four, five years to really understand what the internet could do and to change the way we worked, lived, played, and learned. Same thing with AI. Are the tools there today? Definitely not. But they show us a little glimpse of where the world will be in five years' time. So I'm very optimistic about all the wonderful changes that AI will bring. And I'm also very worried about all the problems it's going to create.
To paraphrase one of my friends, Gordon Pennycook, the thing we really should be worried about with AI is profound bullshit, stuff that is seemingly good - and if we really don't pay attention, yeah, we'll believe it. That's the danger in AI, is that it's just going to bullshit its way into everything. And in case you're wondering, bullshit is a technical term. We use it when we talk about analytics models that have gone wrong. So I'm using it in a very narrow technical sense.
Thom:
Mary would like to know how organizations can responsibly adopt these tools. So, thoughts about the responsibilities and the ethics involved in how we incorporate these things? Sercan, any thoughts about that?
Dr. Sercan Ozcan:
Well, we need to be careful with AI solutions because there are a lot of ethics-related concerns, because the way AI makes decisions and the way humans make decisions are not the same. For instance, in our current roles, we try to be as inclusive as possible. When we make decisions, we don't only consider what is right and wrong. But AI, in its automated decisions, may not be as inclusive - for instance, toward certain minorities or in certain conditions.
So basically, AI may not be as humanistic as we are when we make decisions. This is one of the biggest concerns. When we develop AI solutions, we need to have solutions that embed these types of considerations.
Thom:
Jim or Alan, any thoughts on the responsibilities and ethics?
Alan Dennis:
Let me suggest that companies should think about AI today as an inexperienced teenager. So if you would let an inexperienced teenager give you medical advice, go for it. If you wouldn't, you might want to think twice about putting an unsupervised, inexperienced teenager into your decision-making. So my initial advice is to proceed slowly: look at the low-level tasks that people don't like to do, and drop the AI in there first. In the same way that you would monitor a rookie employee, somebody should be monitoring the AI - to help it learn, to coach it when it makes mistakes, and to understand how far we can let this AI go in giving advice.
Thom:
I want to add one element to the question here and ask Jim your thoughts, and the others as well. Should companies be looking to purchase these kinds of tools out of the box, or should they be investing in developing their own machine learning? Jim, what do you think - build it from scratch or buy it from other vendors?
Jim Samuel:
Well, given the way artificial intelligence works today, we have this concept of emergence and homogenization, which in very, very simple words means we're going to see a few core foundation models that have been trained on a whole lot of data - a very expensive process requiring a high level of expertise - and those are going to form the foundation of a lot of the AI applications that we're going to see come up.
So on that part, I think everyone agrees that for the immediate future, we are not going to see every single small company, or even a mid-sized company, developing its own parallel foundation models.
What we will see is more and more companies making use of these base foundation models, which hold the knowledge of the world, so to say, and then building specific applications on top of them. That's the homogenization part of it: we have these large models that can be used for very specific applications, and between the foundation models and their use in specific applications, you can have a number of steps. You could start with fine-tuning based on your own data. Then you could use things like reinforcement learning from human feedback, which is basically a process that says: okay, we have these models, but they may have some strange behavior and produce undesirable output, so let's hire 2,000, 5,000, 10,000 people to just provide feedback on very specific questions. And now you have moved on from unsupervised or self-supervised learning, which is what the foundation models are based on, to something where you have supervised input. I believe - this is my suspicion, of course, since I have not had a chance to look into the internals of ChatGPT - that ChatGPT and similar applications in the future will not only do fine-tuning and steps like reinforcement learning from human feedback but also have additional expert rules and knowledge-based layers to filter the output.
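As a rough sketch of the layering Jim describes - a base generative model with a rule- and knowledge-based filter screening its output - consider the following Python illustration. The model stub, function names, and blocked patterns are all hypothetical; this is not OpenAI's actual pipeline, just the shape of the idea:

```python
import re

def base_model_generate(prompt: str) -> str:
    # Stand-in for a fine-tuned foundation model (in practice, an API
    # call to the expensive, pre-trained component Jim describes).
    return f"Draft answer to: {prompt}"

# Hypothetical knowledge-based rules layered after fine-tuning and
# reinforcement learning from human feedback, filtering what reaches
# the user.
BLOCKED_PATTERNS = [
    re.compile(r"\bmedical diagnosis\b", re.IGNORECASE),
    re.compile(r"\bguaranteed returns\b", re.IGNORECASE),
]

def filtered_generate(prompt: str) -> str:
    draft = base_model_generate(prompt)
    if any(p.search(draft) for p in BLOCKED_PATTERNS):
        return "This request needs review by a human expert."
    return draft

print(filtered_generate("Which stocks have guaranteed returns?"))
```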
I believe that's more of a risk management exercise, but that's where most companies will be. So foundation models are common; after that, what happens is going to depend on a couple of factors.
Number one, what kind of technological expertise does a company or a corporation possess?
Number two, what level of engagement, or what value creation potential, do they foresee? Because all of this is going to cost money. So if a company wants to spend very little, they are just going to have very little customization. They will limit the input and output, similar to what I think Snapchat did with My AI. They kind of limited it - it's like a lightweight, customized version of ChatGPT. So you can see things like that happening. That's my take on it.
Thom:
Another question from the chat: freelancer Robert Adler asks, how hard is it to deal with these kinds of BS responses from chatbots - convincing but made-up information? Is it possible to fix them? Is that the kind of guidance that Jim is talking about? Alan, what are your thoughts on that question?
Alan Dennis:
Yeah, as Jim said, we can handcraft some of the knowledge, but the base knowledge is probably going to be reasonably good. So the first place I would look at deploying these is the simple low-level things that are the most common and that human employees don't really like.
As we move into the more advanced stuff, it's going to be a lot harder for the AI to give you correct answers. As Jim said, handcrafting some of this, adding in detailed technical knowledge - that's going to be the hard part that needs the most supervision. And like everything, it's good to approach any AI answer with healthy skepticism - and if it doesn't sound quite right, you might just want to think twice about using it.
Thom:
Question for Sercan. Do you think that AI will contribute to misinformation and fake news being spread online or otherwise? What do you think the influence of AI might be in that area of media concern?
Dr. Sercan Ozcan:
Well, it is possible, because many individuals and companies are rushing to use this, and they are even creating jobs specifically for the usage of these tools. And there are many articles that are generated with the help of ChatGPT. When you read some of these things - and I have come across some of them - the knowledge seems very realistic, but sometimes it is not. And what we call this is a hallucination of ChatGPT - it gives you information that looks very realistic, but it's not. Following what Jim and Alan said, I think in the future we will create an interactive layer with humans where we will be fixing these types of problems. I think that will be the main solution in the future, until we maybe reach superintelligence.
Thom:
Great points, please Alan go ahead.
Alan Dennis:
I also study misinformation as well as AI. And to me, that's the part that worries me the most. I study digital humans, very realistic-looking and -sounding AI. Or if you prefer, I could use the word deep fakes. Deep fakes are one version of the technology; there are many other competitors that build products in this space. Deep fakes and other tools like them are going to change everything, particularly for journalism, because we've created digital puppets of several different celebrities and I can make them say anything that I want them to say. And it's really not that hard to do. And at this point, if I look carefully, I might be able to tell, but I have to look really carefully. So my advice to the media and to journalists: if somebody gives you a really hot video of a celebrity saying something, yeah, think twice, because it could be faked and you're not really going to be able to tell the difference. That's the real danger for misinformation here, I think.
Thom:
Yeah, very interesting. Let's take that question a little bit further. Jim, could you comment on your thoughts about having AI write articles? You've talked about some of the safeguards and guidance that the human element still needs to apply to what AI can generate. What would be your thoughts about media using AI to generate news content? What's the role of people as editors and fact-checkers in that? And what do you think that means for the media landscape?
Jim Samuel:
Right, and I think I saw one of those questions in the chat as well. I'm going to borrow the phrase that Alan used: we need to treat AIs as a kind of very smart but inexperienced, and not comprehensively knowledgeable, teenager. I like that way of thinking.
What that means is that the outputs AI produces need supervision. They need human oversight. And I'm a strong believer that all AI output should always be under human oversight. We can use artificial intelligence: if you want to write a 1,000-word op-ed, you can use AIs to generate 3,000 words, but ultimately it must be the human expert who goes through every single sentence, validates it, makes sure the information is correct, and then finalizes the article.
We cannot and we should not expect AIs to run by themselves. That will lead to chaos.
I also want to comment on what Alan said, and I agree with both Alan and Sercan on misinformation. My only difference is more of a philosophical position. I think increased misinformation - misinformation on steroids, turbo misinformation, whatever we want to call it - is going to be a part of our future. The solution many have suggested is censorship. I'm against it, because censorship means somebody decides what is right and what is wrong. I would rather fall back on the wisdom of the crowds. People, the masses, common people have a way of learning. The experts, especially the media and educators, have a responsibility to educate the public, but the public must be allowed to experience misinformation and develop internal mechanisms to deal with it. Just like Alan advised reporters: when a video comes along, you need to check. That same mindset - it's education. It makes no sense to say finance is complicated, therefore we will make all financial decisions for the people. No, the path is to educate everyone on finance.
Similarly with artificial intelligence and information generated by artificial intelligence: we need to teach people what misinformation is, how artificial intelligence can amplify and exponentially multiply the whole misinformation paradigm, and how to deal with it - how to react to it.
Thom:
This brings up some great points about the transparency and oversight of these tools. There's a follow-up question from freelancer Robert Adler about ChatGPT and whether these kinds of tools can be given the ability to estimate their confidence in the answers they give. Robert reports that, as of now, it says it cannot do that. Sercan, do you think it would be beneficial if a chatbot could rate its own responses with some degree of certainty? Is that something we could teach ChatGPT to be aware of?
Dr. Sercan Ozcan:
Well, it would of course be useful for the reader or the user to know how confident ChatGPT is in a response. But following what Jim said, I think in the future we will create a layer where we give more confidence to the responses of ChatGPT as we progress. And I think that can come from the interaction ChatGPT will be having with humans in the future - there will be continuous learning.
Considering the question, we could possibly create a system where ChatGPT attaches an accuracy-type score to the response it generates. In data science, we have models, and when we have results, we broadly know how accurate they are based on how they were generated. A similar layer could be created for ChatGPT as well.
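As a minimal sketch of what such an accuracy layer might compute - assuming access to per-token log-probabilities, which some model APIs expose - average token probability can serve as a crude confidence proxy. This illustrates the idea only; it is not a feature ChatGPT actually offers:

```python
import math

def response_confidence(token_logprobs: list[float]) -> float:
    # Mean per-token probability as a rough 0-to-1 confidence proxy.
    if not token_logprobs:
        return 0.0
    mean_logprob = sum(token_logprobs) / len(token_logprobs)
    return math.exp(mean_logprob)

# The model found the first response's tokens far more likely.
print(round(response_confidence([-0.05, -0.10, -0.02]), 2))  # ~0.94
print(round(response_confidence([-2.30, -1.80, -3.00]), 2))  # ~0.09
```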
Thom:
It seems like some system where the chatbot still has to cite its sources could be very informative for that level of skepticism and media literacy the user can engage with. I'd like to toss to Alan for further thoughts about that. And another comment, from freelancer Mary Branscomb, asks whether the real crux of the issue is areas where the human user may not know enough to be skeptical of the response. What are we left with in that situation, Alan? And how do we build literacy about the responses we get from a chatbot?
Alan Dennis:
I'll say most humans are what we call cognitive misers, which means we don't like to think. We're very careful about when we spend the effort to think. We did a study where we used a piece of technology from a company called Soul Machines that cited sources every time it provided information, and about 5% to 10% of our users wrote on surveys saying they hated this because it cited sources. They just didn't want to know that stuff.
So you could flip it the other way around and say, well, that means most people like the sources. I guess I'm not as optimistic as Jim is about training people to think about misinformation or training people to think about AI because I think many of us go through our daily lives trying not to think. I just want an answer. I don't want to have to spend a lot of effort to get it. And if I get served up this answer, I'm probably not going to think about it unless I have a reason to think about it.
So if it's something important, yep, I'll take time. I'll think about it. I'll double-check. If it's something unimportant to me, then I don't know whether I'm going to invest the cognitive effort needed to figure this out. So to me, it comes back quite a bit to context, and that's why my advice to companies is when you first put this in, put it in stuff that doesn't matter very much because you, the company, need to learn. Your users, we consumers, need to learn. And the best place is the stuff that doesn't matter as much.
Like medical diagnosis - I don't think I'd start with that. Start with something simple like customer service or giving advice about, hey, do you like this kind of tea or that kind of tea? I don't know. Something that's not quite a big decision.
Thom:
Very interesting point. Sercan, please add to that.

Dr. Sercan Ozcan:
Just to add to what Alan said, my recommendation would also be for companies to run pilots instead of scaling up to a big number of customers, all the regions, and all the functions of the business. I agree with Alan. They should start small and start with pilots. They should see the outcomes and learn from them. And then, if the results are positive, they can scale up to other functions, other regions, and other parts of the company.

Thom:
It seems like there are many interesting parallels, as have been noted already, to the earlier days of the internet, now 30 years ago. I'm reminded, as many of you might be, of social media and how there were some bumps in the road in terms of adoption, and some generational divide over things like susceptibility to misinformation and sharing that misinformation. I wonder if you think there are some generational elements at play, Jim. And I have some further context, with a question from freelancer Leslie Mertz about young people navigating the job world and their careers, who may be incorporating AI from day one - AI that eventually replaces most or all of their job. What are your thoughts about the generational impact, with people entering the workforce who might be more open to and more literate with AI, but whose jobs could ultimately be on the line because of it? Quite an interesting paradox.
Jim Samuel:
Yeah, I think there are a couple of dimensions, or a couple of perspectives, you could look at for that question. I recently finished writing an op-ed - it's still looking for a home to be published in - which argues that every young person, especially students and early-career folks looking to build their careers, needs to ask two questions.
The first is, what is it that I can do that AI cannot do?
And the second question is, how can I do my job better and faster with AIs?
That comes with a caveat. If you don't answer these two questions, then you will be in a space where you're going to get into conflict with AIs. What I mean by that is: artificial intelligence applications will be able to produce certain outputs, and if your job is largely one of producing parallel outputs, then you are trying to fight against artificial intelligence.
That is like trying to stop a running train with your hands. You're not going to succeed. You can't type as fast as the AI. You cannot hold 40 terabytes of knowledge in your head. It's simply not possible. You need to figure out the answers to the two questions. What is it that you're better at in your workplace? If you are a news media professional - and this connects to another question I saw in the chat - what is it that you're doing currently? What are applications like ChatGPT able to do? Then answer the question: what can you do beyond that and better than that? And then answer: how can ChatGPT help you do your work?
If you answer these two questions, for the visible future you will remain a very valuable and in-demand person in the workforce.
Thom:
Alan, what are your thoughts about that question with regard to careers that, if they integrate AI successfully, have somewhere to go versus those that maybe leave AI on the table?
Alan Dennis:
So one of the things my colleagues who are university professors are concerned about is what happens when students use AI tools to help them do assignments.
So this is one of those things we're wrestling with: how do we catch them? And often, when students cheat, it's really obvious. One of my colleagues says she always starts every assignment with "in your opinion," because right now, one of the verbal tics that ChatGPT has is to say: I don't have an opinion, but if I did, it would be this. She said you'd be surprised how many students just copy and paste, "I don't have an opinion, but if I did -"
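As a toy illustration of the kind of check Alan's colleague relies on, a script could flag submissions that paste ChatGPT's stock phrasing verbatim. The phrase list here is a hypothetical example, not a reliable detector:

```python
# Hypothetical "verbal tic" phrases; real cheating detection is far
# less reliable than this simple substring check suggests.
TELLTALE_PHRASES = (
    "i don't have an opinion",
    "as an ai language model",
)

def looks_pasted(submission: str) -> bool:
    text = submission.lower()
    return any(phrase in text for phrase in TELLTALE_PHRASES)

print(looks_pasted("I don't have an opinion, but if I did ..."))  # True
```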
So I'm not as worried about students who cheat, because we'll catch those. I'm more interested in trying to help students understand how to incorporate ChatGPT and some of the other new AI tools into their learning. So if we go back, I don't know, 30, 40 years, when I was a student, one of the big debates was: do we let people have calculators?
And today, it's not much of a debate - students just have calculators. And maybe they don't know how to do arithmetic as well, but we've gotten over it. That's what the box is there for.
That's how we have to think about incorporating AI. The AI is a box. It's a new tool. It's like a word processor. Think of what we used to do before Word.
I can, because I've typed essays on a typewriter before, believe me. That was, ugh. So I am delighted we now have word processors. I think we're going to see the same thing with ChatGPT and the new AI tools, where the first draft is going to be written by the AI - and our role, or I shouldn't say our role, maybe your role as journalists, will be to fine-tune that initial draft.
And it also makes me wonder - one of the things you said, Thom, earlier on when we were preparing for this got me thinking about what journalism is going to look like in the ChatGPT era. And one of the things I began to wonder about is that in the pre-internet age, media was pushed. If I wanted to watch something on TV, I turned on the TV and got stuff pushed at me.
The Internet age is all-pull. I wonder if we're going to see the same thing with ChatGPT. I'm not going to tune into a news show or read a newspaper because that's pushing content at me. I don't really care about that.
What I want is to be able to say: today I'd like to know a little bit more about, oh, how did this happen? Or what was the latest runway incursion at an airport? So my news media will be ChatGPT, written on the fly, not pushed at me.
I don't know. I think we're going to see a huge change in just about every aspect of work in the next 10 years.
Jim Samuel:
Yeah, you know, I can't resist the temptation. And I think I'm in Alan's camp when it comes to ChatGPT in academia. My position is very simple: we learn from history. There was a time when people complained about typewriters - people would lose their handwriting - and calculators - people would forget math - and computers - people would become stupid - and the internet -
Thom:
Citing online sources in a paper?
Jim Samuel:
Exactly. We've been through all the arguments. My position is simple. What is the role and responsibility of an educator in every educational institution? It is to prepare the students who are paying you money for the future. What is the future going to look like? The future is going to be filled with artificial intelligence. I think every professor should act on that. I have done it this semester: in one course, mandatory use of ChatGPT. You have to use ChatGPT; if you don't use ChatGPT, you fail. In the other course, I had already released the syllabus before I could do that, so I said it's highly encouraged. And I developed a framework for students to use and engage artificial intelligence, not just ChatGPT - all generative AIs are welcome in my class. Because every educator and every educational institution must prepare students for the real world, not for an imaginary world where there are no AIs. Professors have to change, educational frameworks have to change, and educational institutions have to change to meet the changed reality that Alan spoke about. Since 2022-2023, we live in a new world, and academia needs to adapt. We need to make sure that our students use these artificial intelligence applications and become experts at them, and the institutions and faculty who do that best will be fulfilling their responsibility to prepare the workforce of the future.
Thom:
Sercan, what positives do you see that AI can provide, especially in journalism - enabling more productivity per writer, perhaps, or other kinds of elements that you think could actually be positives?
Dr. Sercan Ozcan:
Well, I have a lot of thoughts about the negatives, but on the positive side, I think there is a lot of routine work in media that many individuals are involved in. Solutions such as ChatGPT can help with some of that routine work - searching for information, gathering information - and then the final editing and bringing the story to life could come from the individuals. Similar to any sector, AI makes us more efficient, and I think AI can make the media more efficient in their tasks, so those who don't adopt solutions like this may even lose a competitive advantage. When we look at this question, it's all about survival. Looking ahead, for companies, using AI is not going to be a luxury; it's going to be a must to be able to offer the same level of service. That was the case in the past, in history: the companies that didn't adopt technology lost. Think about digital cameras; think about the transition from floppy disks to CDs, to MP3s, to cloud technologies.
If you don't adopt technology, if you resist it and try to stop its progression, you will fail. So it's not a nice-to-have; it is going to be a must-have technology for you.
Thom:
Let's turn back to the classroom and there's a really interesting question from freelancer Leslie Mertz in the chat.
How about AI instruction being used to teach students? Are your jobs at risk, professors?
Dr. Sercan Ozcan:
Well, I think Alan and Jim already gave very interesting ideas about the way we will be moving ahead with regard to our assessments. I agree with the things they said, but we need to be very careful. When we think about education, we don't just teach students a certain topic. We teach them how to criticize things, how to critique knowledge, how to gather knowledge themselves, and how to generate original ideas. Currently, for instance, in my modules I have assessments that cannot just be generated from ChatGPT. I give tasks where students have to think for themselves, criticize the knowledge, and come back with a critique of the task I give. So it's not easy for ChatGPT to generate. We will adapt - like the media will adapt - and professors will adapt. We will adapt the assessments we give to students.
And with regard to the delivery of the sessions, well, we are already in a transition. We have recorded videos and lectures. We are in a digital age, and thanks to COVID, this process has already accelerated. Many of my lectures are already recorded - it doesn't have to be ChatGPT; I am already providing all my recorded lectures to my students. But instead of going to a lecture room and repeating the same things to my students, I have more interactive, practical sessions in the classroom. So I have already adapted to the things that will be coming from ChatGPT. Students with ChatGPT may not be able to do the same things we do engaging with me and their classmates in the classroom, so I'm creating scenarios that are more practical - things that cannot be delivered by ChatGPT alone.
Thom:
Such great points - there's a human element that just cannot be overstated. And the more the human element is free to devote its time and resources to the valuable things, while AI takes on the less valuable things, the better. Such interesting points, Sercan.
Thank you. Alan or Jim, anything you want to add to Sercan's points?
Alan Dennis:
I'll jump in and just say there is a wonderful quote on the side of the School of Education building here in Indiana. I'm not going to get it quite right, but I'll get it close: the job of an educator is not just to educate; it is to motivate. And that's one of the things that I think is different about a human. I think humans are much better at motivating other humans to undertake the difficult task of learning. Because let's face it, learning is not easy. We provide motivation. And yeah, we do all the regular stuff - AI can automate some of the regular stuff. But to me, as you said, Thom, it's the personal aspect. It's the human-to-human nature of education that AI is just not going to be able to replace.
Thom:
Jim, any thoughts on this question?
Jim Samuel:
I'm probably going to piggyback on Alan again. My apologies, Alan - I'm piggybacking a lot on you today. But there's another quote, which I have used for years in my teaching statement because I felt it related to my teaching style, and the more I thought about it, the more I liked it. And now it fits very well when I try to explain the difference between an AI and a human teacher. I forget who the quote is from, but it was something like: education is not so much the filling of a vessel as it is the sparking of a flame. What artificial intelligence can do is fill the vessel - just dump knowledge. But what human faculty are able to do is inspire and motivate - I like that picture, lighting a flame. And we set students on a career path. I mean, I've had so many students who were either confused or on a path that was not really aligned with their skills, and I was able to work with them, and they're in very successful careers today.
Dr. Sercan Ozcan:
And although there's a lot of negativity in academia about this, I look at it positively. Maybe we professors will evolve in the way we deliver and do things, and in how we engage with students. Maybe some lecturers were not doing a good job until this point anyway - maybe they were giving students questions that were quite repetitive and not pushing students to think in a different way. So it's going to help us be better professors.
Thom:
It reminds me of how, in the smartphone era, smartphones have been a way of outsourcing our own brain power. We no longer have to remember phone numbers, right? Like we had to do in the 80s. What are some of the things that AI allows us to similarly outsource from our brains so that we can focus on and devote ourselves to other things? We're going to be navigating this realm for some time, it sounds like you're all saying. And I think it's fascinating. Thank you for your thoughts about that.
One, maybe a good question to wrap up here from freelancer, Robert Adler. Does anyone care to look 10 years ahead and give an idea of where we might be in terms of how integrated AI would be in our lives? And does it look like that trajectory is sort of foretold at this point? What are your thoughts, Alan?
Alan Dennis:
I think everybody will have an AI personal assistant that looks and sounds like a digital human. Like Siri today, but there'll be a face and a voice on it. Whenever we do a Zoom call, if I'm a manager, I'm going to bring along my AI assistant and just drop the assistant on the Zoom call. And the AI is going to manage all of these low-level tasks.
People are often concerned, well, is AI going to be in control? And I'll say, well, were you concerned that AI is in control when you use your GPS to drive your car? Or are you concerned when AI drives the car? Okay, maybe you're concerned about that.
But think about the various times we've turned control over to AI and taken direction from it. In our personal and professional lives, in what we do for fun, and in how we learn, AI is going to take a much bigger role. And I'll be honest, I can't foresee the future other than to say it's going to look really different.
If I could foresee the future, I'd tell you which companies to invest in right now.
Dr. Sercan Ozcan:
Well, I think we will be at a stage where human and AI interaction is inseparable. And if we are not at this point already, we will be in the process of singularity, where this interaction between us and AI becomes irreversible. So in many of our activities, in society or in business, it will be a collaborative process with AI, if not in every place. For instance, think about smart glasses: they're now being used in workplaces to try to minimize workers' errors. So AIs will be there for efficiency and to make processes easier for humans.
I think we will be relying on AI in many aspects. And looking back from that point, I'm sure many people will be saying, oh, AI is influencing human biology and the brain in certain ways - just as they now criticize children's engagement with tablets and computers and things like that. I'm sure there will be a point where our engagement with AI will be criticized.
My worry - though I don't know about this; I think we need to watch the developments over a couple of months, maybe a year - concerns policies and laws. I'd just like to go back to what Alan mentioned. For instance, when we think about unmanned vehicles, we blame the AI for the actions it takes. Under the law here in the UK, if there's an accident caused by such a vehicle, the manufacturer is responsible for that action. So with AI, there is already a blame culture - a situation where we hold AI responsible. So let's turn the table around: we may also give credit to AI for its positive actions. If we blame it for its negative actions, we may end up crediting it for the positive ones.
And we need to be careful here, because imagine companies such as OpenAI, with solutions such as ChatGPT, having this sort of oligopoly over the world's intellectual capital. I think we need to be quite careful about that, and I'm quite curious about the solutions that governments will come up with. At a mid-level, publishers as a group already don't accept ChatGPT's outputs - and yet we have co-authored books on Amazon and articles, and the media use it. Our students are possibly using it. So I think we will need to take action. Otherwise, we may have a few major companies controlling the intellectual capital that is generated by AI.
Thom:
I'm going to ask Jim to take us home with your thoughts on this question. Looking out over the next decade, do you see more problems or more benefits? Do you see that interaction and integration with AI benefiting mankind, or causing division and gaps? What are your thoughts?
Jim Samuel:
This is a very difficult question to answer. I cannot even answer in terms of probabilities, but in terms of possibilities, one thing we are going to see is a huge range of AI applications being released. It's a gold rush, and every company is going to rush to it. There's no stopping that.
In terms of the effects, I think it's going to be a survival of the fittest, which means some of the applications are just not going to work out. Probably a majority of applications won't work out and we'll see them drop by the side.
Over a period of time, we'll see more of this. For example, Google announced yesterday that it's releasing an API for its foundation model, which is, I think, very powerful because it incorporates text, images, and code. So you've got all the dimensions we're currently seeing being made available by Google as well.
We will see chaos in some sectors. We will see rapid advances, especially in medicine and other places where there is a stronger emphasis on responsible AI and on risk management, and where safeguards are being implemented. I think that's where we will see really beautiful results.
In terms of society as a whole, it could be chaotic. I anticipate at some point - maybe not in 10 years, maybe 20 - that just like we had the digital divide, we'll see classes being created. In one of the articles I started writing, I outlined about seven classes. But to summarize briefly: there's going to be a group of people who are negatively impacted by AIs - people who may not be able to keep up, are replaced by AIs, or have to work under artificial intelligence, which is a separate topic. Then there is the class of people who are experts. They will be augmented and supported by AI; they will be the ones creating the AI applications, working on developing these AIs, and so on. And at the top of the pyramid are the owners of this artificial intelligence, who will wield extraordinary power.
I don't think the greatest danger is that AI is going to crash everything. The greatest danger, I think, is the wrong people in positions of power - or artificial intelligence itself - using artificial intelligence for belief manipulation; that is, manipulating the way people think and behave.
Thom:
With that, I think that's all the questions we have time for today. I want to thank Dr. Sercan Ozcan, Jim Samuel, and Alan Dennis. Have a great rest of your day.