Episode 71

Published on: 21st Jun 2023

Generative AI

It seems AI has exploded into the world. With so many tools now available to generate everything from text and images to video, voice, and music, it's quickly becoming the fastest-adopted technology in the world. What these tools do is generate something new based on a prompt provided by the user, and they do this after being trained on vast amounts of data using thousands of GPUs.

In this week's talk, Amit and Rinat discuss Generative AI: whether it is really game-changing, its ethical side, and a lot more!

Transcript
Rinat Malik:

Hi, everyone. Welcome to Tech Talk, a podcast where Amit and I talk about all things tech and their implications for our society, our lives, and everything else. Thank you again for tuning in to this week's episode. This week we're going to talk about something very popular nowadays: generative AI. I'm sure you have come across the name ChatGPT, but there are rival tools as well, and all of these new AI solutions can be put into one category called generative AI. We're going to talk about generative AI and where it fits into the whole spectrum of different AIs that are available now, or that we think might be available in the future. It's a very fascinating topic and we're very excited to talk about it. We're passionate about AI in general anyway, and we hope you're keen to hear about it too, because it is one of the hot topics everywhere nowadays.

Amit Sarkar:

Yeah, thanks a lot for the introduction. I think that sums up the term generative AI very nicely. AI is now a very hot topic because most companies are looking at how they can bring AI into their products. Plus, with the advent of ChatGPT, the technology has matured to a level where companies can actually add value to their existing products using these tools; earlier they could not, but now they can. One of the reasons I wanted to talk about this today is that London Tech Week is happening right now. Yesterday was the first day, and Rishi Sunak gave the opening address. He talked about AI, and about regulating the technology behind it: how to regulate it, what you need to consider, and so on. That discussion covers a broad set of AI, but the reason I wanted to focus on generative AI is that it covers a broad set of tools, not just ChatGPT but tools across the board.

Rinat Malik:

Yes, absolutely. There are obviously mixed views on how AI research and innovation should be controlled, or whether it should be controlled at all, but that's probably a topic for another day. Generative AI has taken not just the tech industry but everyone by storm, because it has completely changed the game: what AI systems could do before and what they can do now has changed dramatically, almost overnight, after the arrival of ChatGPT, one of the leading generative AI tools. Before we start talking about generative AI, though, we should understand what different kinds of AI there are, or could theoretically be, and where generative AI fits into all of this. One thing I want to make clear to our audience is that the term "generative AI" sounds very similar to the term "general AI", and these are two completely different things. One of them is still entirely theoretical; it hasn't been invented yet. The other we've started to use and grown to like very much: ChatGPT and similar tools like Bard by Google, plus open-source alternatives available on the market. These are called generative AI. Amit might shed a little more light on the origin of the name, but to me it comes from the fact that these systems generate output that is genuinely new, not a copy-paste or a simple permutation and combination of existing text. I want to distinguish this from general AI, which is not something we have right now and which will potentially take many, many years to achieve.
Now, what is general AI? General AI is general artificial intelligence, analogous to the general intelligence that we humans have. We each have a general-purpose thinking machine, our brain: we absorb information, we have consciousness, we understand what we are absorbing, we process it based on past experience, and we can come up with completely new solutions in unknown environments. One key thing is that we understand the problem, the motivation behind finding a solution, and which solutions may or may not work in an environment we have never been trained on. That's general intelligence, and the day we create it artificially, that will be general artificial intelligence. It would be massively powerful and very scary, because it could very quickly surpass human intelligence, given all the knowledge available everywhere. What we have now is still very impressive: generative AI like ChatGPT and Bard can produce really well-written scripts, paragraphs, articles, and so on. But one thing to remember is that it doesn't understand what it's generating. That understanding is the general intelligence part; generative AI produces very good output, but it has no understanding of what it is producing. That's something I wanted to clarify at the beginning of this talk, so the audience can keep the difference in mind going forward and not be scared, however unsettling a response from ChatGPT might seem. It's not an intentionally scary output.

Amit Sarkar:

Well, yes, I think you made the distinction very clear. General artificial intelligence is still very far in the future; we haven't reached that stage. It's essentially problem solving, and specifically solving problems the system has never encountered before. That's what humans do, right? We find a new problem and we try to solve it by applying whatever we have learned. The current AI models, on the other hand, train on a specific set of data, and they use that data as a baseline to generate new data. Now, you said they are not able to understand what they are generating. Maybe they don't understand it themselves, but they do assign probabilities to the data being generated, so mathematically they can still say that an output is very close to the data they were trained on and to what is being asked. When we talk about generative AI, a lot of people think only of ChatGPT and Google Bard. ChatGPT is from OpenAI and Bard is from Google, and these are the two top tools you might currently think of, but there are others as well, and those two are tools for text generation. There are also tools for image generation, like DALL-E and Midjourney. Then there are tools for audio generation: you can record a piece of audio in your own voice, or use a synthetic voice, give the system a script, and generate new audio in that voice. That's generative audio. And then there is video: Runway ML has its Gen-2 model, I think, where you can write a piece of text and it will generate a video for you. You can also ask it to copy the format of a recorded video you already have and, using that as a baseline, create a new video.
So these are the different forms of generative AI, and as Rinat mentioned, it's called "generative" because it's creating new stuff. But how are these systems generating it? What are the mathematical principles behind it, and what is the model they are trained on called?

Amit Sarkar:

One of the most famous models behind these applications is the GAN, the Generative Adversarial Network. There are two neural networks competing against each other. You have a set of real data. One neural network, the generator, tries to create something as close to the real data as possible, while the other, the discriminator, tries to figure out what is real and what is fake: which is the real data and which is the data generated by the AI. If the discriminator can't tell the difference, the network that created the new, fake data gets a higher score; if the discriminator can detect that something is fake, the generator gets a lower score and that output gets filtered out. That's how you train the model. The model itself then depends on the data you train it on. That data could be very good: neatly labelled, clearly described, high quality. Or it could be just raw images or random text without any labelling or categorisation, in which case even a well-designed model will not give very good output, because nothing has been categorised, labelled, or contextualised properly. The last bit is that once these models generate data, the output can carry bias, because of the inherent bias in the data we pick. Suppose we pick data only of a white population; then the model won't know how to generate, say, a Black or an Asian population, because it has never had access to that kind of data. Or the data is specific to certain categories of music, so the model won't be able to generate other forms of music. I'm just giving some random examples, but I hope you get the point.
So bias is there, and the quality of the data matters: you have a model, and then there's what data the model is using, what the quality of that data is, and whether it has any bias in it. These are all important considerations whenever we talk about generative AI, which is why the prompts we write for the model matter so much. Some models are optimised for a certain type of prompt: you have to give a verb, a noun, define the scene, describe how the output should look, and so on, and then the model will give you a very accurate output, very close to what you want. So you need to be able to describe what you want very well. We did a talk on prompt engineering; that's where prompt engineers come in, because they can design a prompt for a specific model. Each model has a different way of writing a prompt: Midjourney is different, ChatGPT is different, Bard is different. How you write the prompt determines the output you get. It's quite a fascinating thing, and we just wanted to talk about it today.
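The adversarial game described above can be sketched in a few dozen lines. The toy below is purely illustrative, not how any production GAN is built: a one-dimensional linear "generator" and a logistic "discriminator" with hand-derived gradients, where real data comes from a Gaussian centred at 3 and the generator's samples start near 0 and get pushed toward the real distribution.

```python
import numpy as np

# Toy 1-D GAN sketch (illustrative only).
# Generator:     G(z) = w*z + b
# Discriminator: D(x) = sigmoid(a*x + c), scoring "how real" x looks.

rng = np.random.default_rng(0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

w, b = 1.0, 0.0        # generator parameters
a, c = 0.1, 0.0        # discriminator parameters
lr, batch = 0.05, 128

for step in range(3000):
    real = rng.normal(3.0, 1.0, batch)   # samples from the real distribution
    z = rng.normal(0.0, 1.0, batch)      # generator noise input
    fake = w * z + b                     # generated ("fake") samples

    # Discriminator update: gradient ascent on log D(real) + log(1 - D(fake)),
    # i.e. push D(real) toward 1 and D(fake) toward 0.
    d_real, d_fake = sigmoid(a * real + c), sigmoid(a * fake + c)
    a += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator update: gradient ascent on log D(fake), i.e. try to fool D.
    d_fake = sigmoid(a * fake + c)
    dx = (1 - d_fake) * a                # d(log D(fake)) / d(fake)
    w += lr * np.mean(dx * z)
    b += lr * np.mean(dx)

print(f"generated sample mean after training: {np.mean(w * rng.normal(0, 1, 10000) + b):.2f}")
```

After training, the generated samples' mean should have drifted from 0 toward the real data's mean of 3, exactly because the discriminator's feedback is the only signal the generator ever receives.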

Rinat Malik:

Yes, yes, absolutely. You can steer different AI systems in different ways based on how you prompt them. The world of AI is just so interesting, and it's getting more and more complex, yet more and more interesting, at the same time. While we're not at immediate risk of AI taking over, it's still a space to stay in touch with and to closely monitor. You mentioned London Tech Week, and some of the leaders of the world being concerned about how AI is being developed. Whether the AI systems that currently exist are taking away people's jobs is one question; whether AI will take over the world is another. These are two different problems, and I don't think there is cause for great concern on either one. In terms of AI taking away people's jobs, that's just another way of asking whether automation will take over people's jobs, which we were asking two years ago; whether Microsoft Excel would take away people's jobs, which we were asking twenty years ago; and whether calculators would take people's jobs, which we were asking sixty years ago. It's the same question that keeps coming back, and hopefully people will re-skill and transition into more skilled and more interesting jobs. As for whether AI will take over the world, that would only be an issue if general AI were invented and not controlled before it was invented. Given how much traction AI safety has, I think that by the time we are very close to inventing it, we will have put enough safeguards in place.
But again, we are talking about something that could surpass all historic human knowledge in maybe hours, or less than a day. So it is still dangerous, but we're far, far away from it. I keep coming back to the distinction between general AI and generative AI, and the understanding part. There is an interesting metaphor I want to put in front of our audience: a philosophical thought experiment called the Chinese Room. That's just what it's called, no offence to Chinese people; you could equally imagine an English Room for native Chinese speakers. The thought experiment goes like this. Imagine a person who does not understand a word or a letter of Chinese, sitting inside a room. Inside that room is an unlimited supply of paper with instructions, or, in our modern setting, a computer holding an enormous database of responses. The person doesn't know any Chinese, but he can input something into the computer and it will give him the right response. You can only communicate with this person by slipping a note under the door; you can't talk to him directly in any other way. So someone who knows Chinese writes something down in Chinese and slips it under the door; the person inside takes that piece of paper, asks the computer what the response should be, the computer prints out a reply, and the person slips that new piece of paper back under the door.
Now, suppose this computer has an appropriate Chinese response to anything anyone could ever write.

Rinat Malik:

Now, if you communicate with this person many, many times and ask any questions, you will get the right response every time, as if a native Chinese speaker were talking. Would you say that person knows Chinese? In real life, he doesn't. But in the thought experiment, someone could argue that the whole system, the man, the computer, and the room taken together, knows Chinese, even though the person himself doesn't, and I think that's quite clear to us, because he's just relaying all the responses blindly. It's not exactly the same as what's happening with ChatGPT, but ChatGPT has simply been trained on what the right response to almost anything should be. It's been trained on such a massive amount of data that it can respond in a way that looks like a thoughtful answer, but at the end of the day it is an extended version of the Chinese Room. So, going back to the original point about whether AI will take over, and the concern that world leaders have: I don't think we're anywhere close to creating general AI, but it could happen within our lifetimes, and it is something to watch out for and take appropriate safeguards against.
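The Chinese Room setup above fits in a few lines of code: a lookup table stands in for the computer's huge database, and the function is the person inside, relaying answers it cannot understand. The phrases here are our own toy examples, not part of any real system.

```python
# Minimal Chinese Room sketch: correct-looking Chinese replies via pure lookup,
# with zero understanding anywhere inside the function. The tiny phrasebook is
# a stand-in for the computer's enormous response database.

PHRASEBOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",  # "How's the weather today?" -> "It's lovely."
}

def chinese_room(note: str) -> str:
    """The person inside: passes the note to the 'computer' (a dict lookup)
    and hands back whatever comes out, understanding none of it."""
    return PHRASEBOOK.get(note, "对不起，我不明白。")  # fallback: "Sorry, I don't understand."

# From outside the door, the replies look fluent, yet nothing in the room
# understands Chinese.
print(chinese_room("你好吗？"))
```

Scale the phrasebook up far enough and the room becomes hard to distinguish from a speaker, which is exactly the intuition being applied to large language models here.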

Amit Sarkar:

Yeah, I think that's a very interesting experiment. I can relate it to generative AI in a way, not in the sense of training the model, but in thinking about what I can do as a creator. Suppose I don't know how to paint, and now I can ask an AI to generate a painting of a particular subject in a particular style. There is an artist who can do the same thing, but that artist has years of training. It's similar with language: I know one language and can speak it fluently, and in a language I don't know, I can still create something using an AI tool. That's the power of generative AI. So whom would you give more value to: the artist who trained for many years, or the person who just wrote a few pieces of text to create art using AI? That's the debate people are now having. Should we actually give awards to photographs, images, or videos that have been created by AI? It's a very pertinent question that a lot of people, and a lot of awards, are now thinking about. Some are restrictive and say you can't use any AI tools at all; others are embracing them, saying this is a new medium that lets you bring in more ideas. Imagine a person like me or you. I'm not an artist, at least not with paint; maybe you are a musician or some other creative artist. Let's say neither of us is a painter, but now we have access to a tool that can help us paint in a particular style. Earlier, we had ideas in our heads but couldn't realise them; now, with these tools, we can. That's the power of generative AI, and that's why there is now a debate in the creative industry about whether these tools should be allowed or not.
How much scope should be given? Should we put up a disclaimer saying that this art, or this particular video or audio, has been generated using AI? I think the disclaimer is very important, because it helps make sure misinformation is not spread. You don't give a person credit for an artificially generated image or text, because you know it was created by AI, and the disclaimer also stops misinformation, such as claims that President Trump said something he never actually said. We talked about this in one of our episodes on deepfakes: if Donald Trump or, say, Barack Obama appears to say something very controversial about Russia or China, that can be misinterpreted, and it could even lead to a war, when in fact they never said it; someone simply created a deepfake video using AI, using their faces and generating audio in their voices. It's crazy to think about the applications and the implications of these things. So that's one thing I take from the Chinese Room thought experiment: to someone outside the system, the person who doesn't speak Chinese looks like they know Chinese, even though they don't know anything about it.

Rinat Malik:

Yes, yes, absolutely. There are so many ways to think about it. In my head, I'm putting what you just said into two different categories. One is the ethically right and wrong applications of this powerful tool. You can make deepfakes, and you can also scam people. Recently I've heard of some quite elaborate and very convincing scams,

Amit Sarkar:

like voice calls generated by AI.

Rinat Malik:

And not just voice calls; some of them were actual video calls, because of generative AI. As you mentioned, there are AI tools which can create videos based on existing videos. Just from this talk, someone could have enough information to generate anything "said" by us which we never actually said. So that's the first category: the ethically wrong applications, applying this tool wrongly.

Rinat Malik:

That is absolutely a risk and something we should be careful of. Nowadays, with the advent of all these new tools, we should be more and more careful about how we might be manipulated. The other thing you mentioned, which in my head is a separate issue, is transparency and the human contribution to creating a piece of art or any other content. If we're creating content and putting it out there for the world to see, just like this podcast, how important is it to declare that an AI helped create it? As you mentioned, a lot of people have ideas but didn't previously have the capacity to make them a reality. At what point of augmentation does it stop being a human's contribution? I'm not a musician, but I could think of a melody in my head, and then, forget AI, if I just hired a musician and said, here is the melody and some words I've written, make it into a song, I would essentially own that IP, because it was my idea. Now, instead of a human, I'm giving that same idea, that minimal amount of information and direction, to an AI tool, and it makes it into a song. The AI tool is using all the knowledge it was trained on, just as the human I would have hired would use their years of experience in the industry. So the person who directed the generation is not unrelated to ownership: it was my idea, and I made it into a reality by "hiring" someone. You could also think of AI as simply augmenting human capacity.
Say I want to build a DIY project, some IKEA furniture. I could use a manual screwdriver, or I could use a power tool. The power tool lets me make this table or cupboard faster and a little more professionally: doing it manually with a screwdriver would not only be slower, the result might also be loose and a little wonky in places, whereas with a power tool, and knowing how to use it properly, I can get a much more professional result very quickly.

Rinat Malik:

Now, isn't that what AI tools are doing? I have the idea, and I'm just using a different tool which is more powerful and more advanced. At what point does it stop being my contribution and become the creation of the AI? If I have given a lot of thought to what prompt to write, and not just thought but many, many attempts to get an output from the system that is good enough to be on the market, do I not have a lot of intellectual input into that article or piece of art, whatever it is? And then who is to say who contributed more to creating it? From a consumer's perspective, I would like to know whether something was augmented by AI, but then again, do I really want to know every tool used in everything we consume? There are music composition programs, FL Studio is one, Ableton is another, and there are a few more. Every time we hear music, do we want to be told which software was used to compose it? That would be quite tedious. So it's an open argument, and we would like to hear all of your thoughts. Audience, please do reach out and let us know, and we could have a more lively debate if you like.

Amit Sarkar:

Yeah, I have a few thoughts on that. Firstly, I think we need to understand what technology and tools are, and where humans sit with respect to them. We tend to think technology is what enables everything, and we forget the human aspect. You can have a screwdriver, but the screwdriver by itself doesn't screw anything; you need a human to use it, even if it's a power tool. The power tool by itself will not do anything; a human has to interact with the tool to get the output, the finished furniture. The tool alone can't do anything, and the human alone can only do so much, but combined they can achieve greatness. Something similar is happening with generative AI. Just having access to ChatGPT doesn't mean you will create an ebook or an award-winning piece of art. You have access, but nothing will be generated automatically from that access. General intelligence might do that, but we're not there yet. A human prompt is still necessary to direct the AI model to generate the specific art you're looking for. The idea is still coming from you, so the IP should still be owned by you, just as the furniture was built by you, not by the screwdriver or the power tool. Without the human, the tool is meaningless; but without the tool, you can still figure out a way to assemble the furniture with your own hands. Similarly, without the tool you can still find a way to compose music, create a painting, or write a book, but the tool makes it better, faster, and easier: you save a lot of time, you don't get tired, you can do other things in the same amount of time, and so on. The same thing is now happening with AI tools.
Plus, another thing a lot of people forget is that there are different versions of these models. In ChatGPT's case, GPT-3 was the basis for the original ChatGPT, and then GPT-4 came along as the basis for the new ChatGPT; ChatGPT Plus subscribers have access to GPT-4. GPT-4 was trained on a vast amount of data, much bigger than what GPT-3 was trained on, and training on that much data requires vast amounts of computing. Someone has to pay for that compute; somebody has to train those models. We will reach a point where training ever-larger models is no longer a lucrative option, because they require such large amounts of computing power, so we will have to figure out better models that generate the same high-quality output using less compute. That's something we also have to cater for, because generating something doesn't come free; someone has to pay for it. And what is the idea that goes into generating? It's like having a pen: can everyone write a book, or a good story? Everyone has access to a paintbrush, but can everyone create a piece of art with it? It's just a tool, so let's not forget that technology and tools don't solve problems; humans solve problems using those tools.
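The point about training cost can be made concrete with a back-of-the-envelope calculation. A widely used rule of thumb puts training compute at roughly 6 FLOPs per parameter per training token; the model sizes and token counts below are hypothetical round numbers for illustration, not any lab's actual figures.

```python
# Rough training-compute estimate using the common ~6 * N * D approximation,
# where N is the parameter count and D is the number of training tokens.
# All figures below are hypothetical round numbers, not real model specs.

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute in FLOPs."""
    return 6.0 * params * tokens

for name, params, tokens in [
    ("small model", 1e9, 2e10),    # 1B parameters, 20B tokens
    ("large model", 1e12, 2e13),   # 1T parameters, 20T tokens (hypothetical)
]:
    print(f"{name}: ~{training_flops(params, tokens):.1e} FLOPs")
```

The thousand-fold jump in both parameters and tokens multiplies the compute bill by a factor of a million, which is exactly why ever-larger models eventually stop being a lucrative option.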

Amit Sarkar:

I think that's very important, and a lot of the time we give so much importance to the technology that we forget it. It goes back to the same thing you said earlier about automation: calculators will replace jobs, Excel will replace jobs, automation will replace jobs, and so on. I'm in software testing, and there has been talk that manual testing, by hand, using your brain, is outdated, no longer acceptable, about to disappear, because automation is here. Ten years down the line, sixteen years down the line, automation is here and manual testers are still here. The job itself hasn't gone; it has just adapted. Automation has made us testers better at our jobs, because all the repetitive, mundane stuff has been automated, so we don't have to repeat it. It's the same with AI tools: the mundane stuff will be automated. I just saw a TikTok video, an Instagram Reel, in which a YouTube creator described how it used to take so many people to create one shot: a designer, a choreographer, a costume designer, a producer, and so on. Now, with a simple prompt, you can create everything and get a visual of the final shot without actually needing a designer or a creative team, and once you have that visual, with the lighting in place, you can use it as a reference to shoot the scene in reality. So with a single line of text you can replace so much work, but that doesn't mean those jobs are replaced; it means you don't have to spend so much time on ideation. The ideas have come, and now we can talk about them and do something. Which reminds me, I think we need to put in a disclaimer of our own.
For this podcast, I actually asked ChatGPT and Google Bard for ideas on what to talk about. I gave a prompt like: "I co-host a podcast on technology. Today we are going to talk about generative AI. Can you suggest some ideas for this podcast?" It gave us a list of ideas, and we are using some of them. Of course, I was the one who picked the ideas, and Rinat has added his own material. So that doesn't mean this podcast should be credited to ChatGPT; we are using our own heads, and we just used it for ideation.

Rinat Malik:

Yeah, absolutely. The articles both of them generated were really nice, but they were a bit "TL;DR" for me; they were too long to read in full, and I had a lot of things I wanted to say to you all anyway which didn't need direction. But I can't deny how well structured they were. They had bullet points covering all the things everyone is currently talking about; each was actually a really well-written piece, and anyone following it would benefit, because it has all the important points people are discussing. Our conversation has been fully natural, but we had seen those articles and knew what the headings were, and that has potentially made us even more confident in what we're talking about. So yes, absolutely, it is a tool which can augment your performance, and people should definitely use it, and find newer ways to innovate and newer ways to create jobs which don't exist right now.

Amit Sarkar:

I think one of the other things I wanted to touch upon with generative AI is the implications for data security and privacy. A lot of times we write these prompts thinking we can input anything and it'll give us an output. But you have to remember that all those prompts are being stored on servers somewhere. Suppose you take a court case and you want to summarise it. That court case may not be a public document. It could be confidential, and now you've put it as a prompt into a tool that is hosted in the cloud. All that confidential information is now sitting on a cloud server; even though it's not public, your confidential data has effectively been leaked to a third party. So you have to be very careful about what you're typing into these prompts. Make them as abstract as possible, without giving any personal information, or any information that is confidential or could be used for malicious purposes. Everything you input can be used to train the model, along with the response it gives, because there are humans evaluating the responses. In Google Bard you can like or dislike a particular response, and that feedback trains the model: this was the prompt, this was the response, liked or not liked. So if you give it a lot of personal data, it will be trained on that personal data, which may be useful, but it's not something you should be doing, because your data is then getting leaked to external sources.
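The "keep your prompts abstract" advice above can be partly automated. As a minimal sketch (the patterns here are illustrative only; real redaction needs far more than two regexes), obvious personal data can be stripped from a prompt before it ever leaves your machine:

```python
import re

# Illustrative redaction pass: replace obvious personal data in a
# prompt with placeholders before sending it to a cloud-hosted model.
# These two patterns are examples, not an exhaustive PII filter.
PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),   # email addresses
    (re.compile(r"\+?\d[\d\s-]{7,}\d"), "[PHONE]"),        # phone-number-like runs
]

def redact(prompt: str) -> str:
    """Return the prompt with matched personal data replaced by placeholders."""
    for pattern, placeholder in PATTERNS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

print(redact("Summarise the case filed by jane.doe@example.com, tel +44 20 7946 0958"))
```

A real pipeline would also handle names, addresses, case numbers and so on, but even a simple pass like this reduces what ends up stored on someone else's server.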

Rinat Malik:

Yes, absolutely. And something like this actually did happen, I think. Someone accidentally exposed secrets of a country's defence systems; I don't remember which country it was.

Amit Sarkar:

Not defence secrets. The person used Chat GPT, or some similar tool, to find references, sorry, precedents, for a court case they were fighting, and they just blindly took the output and used it. Those precedents were all randomly generated; they were all fake. So also bear in mind that whatever Chat GPT or any of these tools generates is not necessarily factually correct, because what they're doing is predicting the next word, the next thing, based on what has already been generated. Suppose it has generated "The AI". It will then try to predict what comes next, and with what probability, based on the prompt you've given: "the AI is powerful", "the AI is generative", "the AI is general", "the AI is unethical", and so on. It predicts the next word; it doesn't check facts.
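The "predict the next word" behaviour Amit describes can be illustrated with a toy bigram model, a deliberately simplified stand-in for a real LLM: it picks the continuation it has seen most often in its training text, with no notion of truth.

```python
from collections import Counter, defaultdict

# Tiny "training corpus" echoing the example completions above.
corpus = (
    "the ai is powerful . the ai is generative . "
    "the ai is general . the ai is powerful ."
).split()

# Count how often each word follows each preceding word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str):
    """Return the most frequent next word after `word` in the training text."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("is"))  # "powerful" follows "is" most often in this corpus
```

Real models predict over whole vocabularies with deep networks rather than bigram counts, but the key point is the same: the output is the statistically likely continuation, not a verified fact.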

Rinat Malik:

Exactly. And there is a very quick and interesting way you can test this yourself with Chat GPT or Bard. If you ask it what two plus two is, it will pretty much always come back with four, I'd guess 99% of the time, because it has seen "two plus two equals four" so many times in its training data. But if you give it two random large numbers, say 746 times 982, it will generate an answer, and that answer will very likely be wrong, because it's very unlikely that this particular multiplication appeared anywhere in its training data. It takes two seconds to try: ask "what is five plus five" and it will come up with 10 very easily, because plenty of data shows that; but multiply or add two random large numbers and it's unlikely to have seen that exact text before. So it's not understanding how mathematics works, how multiplication or even addition works. It's just repeating what it's been trained on. There is usually a lot of training data involving small-number calculations, but as soon as you go large, you realise how quickly it breaks down. And that's a good example for understanding how Chat GPT, or any of these generative AI systems, works.
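The experiment Rinat describes amounts to comparing a model's answer against exact arithmetic. A hedged sketch of that check follows; `ask_model` here is a hypothetical stand-in for whatever chat API you use, faked to mimic the behaviour described (right on a common small sum, plausible-looking but wrong on a large product):

```python
def ask_model(question: str) -> str:
    """Stand-in for a real chat-model call (hypothetical, for illustration).

    Mimics the observed behaviour: memorised small sums are right,
    a large product comes back plausible-looking but wrong.
    """
    canned = {
        "5 + 5": "10",
        "746 * 982": "731,972",  # wrong: the true product is 732,572
    }
    return canned.get(question, "unknown")

def check_arithmetic(question: str, expected: int) -> bool:
    """Compare the model's answer against Python's exact arithmetic."""
    answer = ask_model(question).replace(",", "")
    return answer == str(expected)

print(check_arithmetic("5 + 5", 5 + 5))          # True: seen often in training data
print(check_arithmetic("746 * 982", 746 * 982))  # False: the model's product is off
```

Swapping `ask_model` for a real API call turns this into the exact two-second test described in the conversation.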

Amit Sarkar:

I think that's a very good example. With GPT-4, and maybe other tools, they are now training on mathematics as well. But with GPT-3, which Chat GPT was using at the time of launch, yes, it could not produce the correct mathematical answer. It also doesn't have exact, up-to-date information; it only has the information it was trained on. So you have to take everything with a pinch of salt. Even if it says something, you have to ask whether it actually makes sense, and whether you can verify what it's saying. Unless you can verify it, or it makes sense to you, you should not be using it or claiming it's true. So those are some of the things you have to be very careful about whenever you're using any tool. Just because you have a power drill, a power tool, in your hand doesn't mean you can drill or do anything with it. It has a battery; you have to keep charging the battery, and if you don't, you can't use it. And even though it's a power tool, that doesn't mean you can poke it into a human being. It is meant for specific objects only, under specific circumstances. So you have to be very careful about what tool you're using, and for what purpose.

Rinat Malik:

Absolutely, absolutely. This has been a really nice conversation, Amit; I've thoroughly enjoyed it, and hopefully our audience enjoyed listening too. We really do urge you to reach out with any comments, feedback or new topics, or if you want to debate anything we've said in this episode or any of the past ones. Our contact details are on all the platforms you're listening on. We look forward to hearing from you. Thanks again for tuning in, and hopefully we will see you again next week.

Amit Sarkar:

Thank you so much, and I hope you had a good time listening to our chat on this topic. Thanks again for tuning in. See you next week. Bye.

Rinat Malik:

Bye


About the Podcast

Tech Talk with Amit & Rinat
Talks about technical topics for non-technical people
The world of technology is fascinating! But it's not accessible to a lot of people.

In this podcast, Amit Sarkar & Rinat Malik talk about the various technologies, their features, practical applications and a lot more.

Please follow us to hear about a popular or upcoming technology every week.

#Tech #Technology #Podcast

Find us at
Amit Sarkar - https://linktr.ee/amit.sarkar007
Rinat Malik - https://linktr.ee/rinat.malik

Contact us at - https://forms.gle/AauF6eic2CQv2Lvn9

Review us at - https://www.podchaser.com/podcasts/tech-talk-with-amit-rinat-1556283

About your hosts

Amit Sarkar

Amit Sarkar is an experienced software professional with over 15 years of industry experience in technology and consulting across telecom, security, transportation, executive search, digital media, customs, government, and retail sectors. He loves open-source technologies and is a keen user.

Passionate about systems thinking and helping others in learning technology. He believes in learning concepts over tools and collaborating with people over managing them.

In his free time, he co-hosts this podcast on technology, writes a weekly newsletter and learns about various aspects of software testing.

Rinat Malik

Rinat Malik has been in the automation and digital transformation industry for most of his career.

Starting as a mechanical engineer, he quickly found his true passion in automation and in implementing the most advanced technologies where they can be utilised the most. He started by automating engineering design processes and moved on to Robotic Process Automation and Artificial Intelligence.

He has implemented digital transformation through robotics in various global organisations. His experience comes from working in some of the most demanding industries, starting with the Finance industry and moving on to Human Resources, the Legal sector, the Government sector, the Energy sector and the Automotive sector. He is a seasoned professional in Robotic Process Automation with a keen interest in Artificial Intelligence, Machine Learning and the use of Big Data.

He is also the author of a published book titled “Guide to Building a Scalable RPA CoE”.