Personal Note
Dear community,
After a hiatus of just over a month, I am returning to my writing, albeit a bit later than planned. I've found myself more involved in supporting my wife with our two children than we had anticipated. Navigating the challenges of caring for two children simultaneously is quite a different ball game compared to managing just one. 😊
Nonetheless, here we are, with my first paid article completed. In my paid articles, I plan to adopt a more personal tone, given the smaller community size here, and I will strive to respond to comments as best as I can, despite the demands of my rigorous schedule (family, academics, BMA, business, and leisure).
I aim to publish one paid article per week, but due to my aforementioned busy schedule, I cannot always guarantee this. Additionally, I am keen to explore topics that the community demands, provided I feel sufficiently knowledgeable in those areas. While I intend to keep my usual topics (geopolitics, conflicts, and macroeconomics impacting geopolitics) out of the paid section, it may not always be possible to make a clear distinction, I suspect.
Furthermore, I'm excited to share that I've already started working on my next major geopolitical update, which is slated for release this week.
With that said, let's dive into the first article.
Introduction
Artificial Intelligence (AI) seems to be the talk of the town these days. But is it really ubiquitous? What exactly is AI? How does it function? And what are its implications? I aim to provide a high-level overview for you, attempting to describe AI in language that everyone can understand. Additionally, I will specifically focus on the SWOT (Strengths, Weaknesses, Opportunities, and Threats) dimensions. Why? Because I believe that many people, lacking specific understanding, fear AI, and I hope to alleviate some of these fears with this article.
But what qualifies me to write about AI? As some of you may know, since I've mentioned it several times, I have degrees in Engineering, Computer Science, and Management/Business Administration. Additionally, as you may be aware, I'm about to launch a new business venture very soon. It will be an IT and Management Consultancy, focusing on SAP and AI, in collaboration with a professor who specializes in AI at the university.
What is AI?
I do not plan to delve into the minutiae here, given the vast complexity of the subject matter, which makes it challenging to explain in a way that everyone can grasp. Instead, I'll attempt to explain a specific aspect of the "Machine Learning" approach. Of course, there are numerous AI models beyond Machine Learning, and even within ML, there are myriad approaches.
Let's consider ChatGPT, a prominent example. What does GPT stand for?
Generative: This is self-explanatory, I believe. It means that this model generates new content based on the input. Ideally, the content it generates is relevant and accurate. For instance, if you ask the model a question, you would expect a sensible and truthful answer.
Pretrained: Typically, the model knows what it is supposed to do. The algorithms are already in place. However, it doesn't possess the requisite knowledge to provide correct answers. It's akin to an empty vessel. The trick lies in feeding it data. For example, by allowing it to "crawl" the internet or specific web pages preferred by the developers. Essentially, the model reads all the text and "learns" through this process. This is referred to as "training." Thus, every new model on the market needs to be trained first, which often represents the costliest phase of developing a new model. We'll delve deeper into this later.
Transformer: This is where the "magic" happens. The transformer "transforms" the input stream into an output stream by applying a highly complex algorithm and the learned data.
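To make the "transforming" a bit more concrete, here is a minimal sketch of scaled dot-product attention, the core operation inside a transformer layer. Everything here is illustrative: the toy "token" vectors stand in for learned embeddings, and a real model stacks many such layers with learned weight matrices.

```python
import math

def softmax(xs):
    # turn raw scores into probabilities that sum to 1
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(queries, keys, values):
    # Each output vector is a weighted average of the value vectors,
    # weighted by how strongly the query matches each key.
    d = len(keys[0])
    outputs = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        out = [sum(w * v[j] for w, v in zip(weights, values))
               for j in range(len(values[0]))]
        outputs.append(out)
    return outputs

# Three toy "token" vectors standing in for learned embeddings.
tokens = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
result = attention(tokens, tokens, tokens)  # self-attention
print(len(result), len(result[0]))  # one transformed vector per token
```

The key point is that every token's output depends on every other token's input, which is what lets the model take the whole context into account.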
How does it work?
I'll attempt to explain, albeit superficially, how such an algorithm functions. Keep in mind that there are numerous algorithms out there, so my explanation will offer a bird's-eye view, focusing on deep neural networks, which are employed by ChatGPT, among others.
As previously mentioned, one must expose the model to training data. This could encompass corporate documents or, on a broader scale, the internet. Models can be tailored to specific corporate data or trained on general knowledge available online. Hence, I won't delve into technical specifics here. Several training approaches can be employed either separately or concurrently:
Training Data Labeled by Humans: Initially, humans must review the data collected by the model during its "crawling" phase. They determine which model-generated conclusions based on the learned data are accurate. By labeling these conclusions as true or false, the model adjusts its parameters and probabilities, improving its accuracy with each iteration.
Knowledge Generated by Probabilities: It's not always feasible to train your model through human labeling. In such cases, a vast amount of freely available data is consumed, with the model's probability assessments being cross-referenced against previously learned data. When the model reaches a satisfactory level of accuracy, it can be released to the public, which then assumes the role of labeling through mechanisms such as the "dislike" button in ChatGPT. This feedback loop enables the model to learn from user interactions, adjusting its internal probability tables accordingly.
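The feedback loop described above can be sketched in a few lines. This is a deliberately crude toy, assuming a "model" that is nothing but a score table over candidate answers; real systems adjust millions of internal parameters instead, but the principle of nudging probabilities with like/dislike signals is the same.

```python
from collections import defaultdict

class FeedbackModel:
    def __init__(self):
        self.scores = defaultdict(float)   # score per (question, answer)
        self.answers = defaultdict(list)   # known candidate answers

    def teach(self, question, answer):
        self.answers[question].append(answer)

    def answer(self, question):
        # pick the candidate with the highest learned score
        return max(self.answers[question],
                   key=lambda a: self.scores[(question, a)])

    def feedback(self, question, answer, liked):
        # the "dislike button": nudge the stored score up or down
        self.scores[(question, answer)] += 1.0 if liked else -1.0

m = FeedbackModel()
m.teach("capital of France?", "Lyon")
m.teach("capital of France?", "Paris")
m.feedback("capital of France?", "Lyon", liked=False)   # users disliked
m.feedback("capital of France?", "Paris", liked=True)   # users liked
print(m.answer("capital of France?"))  # Paris
```

After a few rounds of feedback, the model's preferred answer shifts toward what users rewarded, without anyone hardcoding the "right" answer.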
In essence, the model constructs text (or images) incrementally, continually evaluating the entire text to determine the most probable next word or symbol based on its training data.
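This word-by-word construction can be illustrated with a toy next-word predictor: count which word follows which in a tiny training text, then generate by repeatedly appending the most frequent follower. Real models condition on the entire text so far (via the attention mechanism), not just the previous word, but the step-by-step principle is the same.

```python
from collections import Counter, defaultdict

training_text = (
    "the model reads the text and the model learns the text patterns"
)

# "Training": count which word follows which
follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

# Generation: start from a word, repeatedly pick the most probable next one
word, generated = "the", ["the"]
for _ in range(4):
    if word not in follows:
        break
    word = follows[word].most_common(1)[0][0]
    generated.append(word)
print(" ".join(generated))
```

With such a tiny corpus the output quickly loops, which also hints at why the amount and quality of training data matter so much.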
I hope this wasn't overly detailed.
SWOT-Analysis
Strengths:
AI enables the automation of routine tasks that adhere to recognizable patterns expressible through probabilities. For instance, if your business receives electronic inquiries that require decision-making—whether to accept them and under what conditions—you have several options:
A salesperson manually reviews each inquiry, making decisions based on either predefined rules or personal judgment. If decision criteria are straightforward, conventional software applications with hardcoded algorithms could replace this manual process.
Alternatively, you could analyze historical data (inquiries, orders, decisions), conduct statistical analyses to identify the most probable outcomes, and codify these findings into an algorithm for a traditional program. This approach, though complex and rigid, could also substitute human judgment.
Employing an AI model to learn from past data and decisions, followed by a period of human training (labeling) to fine-tune the model, offers a more flexible and adaptable solution than hardcoded software. Regular human oversight is necessary to ensure the model continues to learn and adapt to new situations.
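As a hypothetical sketch of that third option: instead of hardcoding rules, one can derive accept/reject decisions from historical human decisions by simple counting. The features (`known_customer`, `large_order`) and the history data are invented for illustration; a real model would generalize to unseen combinations rather than just look them up.

```python
from collections import defaultdict

# Historical inquiries: (known_customer, large_order) -> human decision
history = [
    ((True,  False), "accept"),
    ((True,  False), "accept"),
    ((True,  True),  "accept"),
    ((False, True),  "reject"),
    ((False, True),  "reject"),
    ((False, False), "accept"),
]

# "Training": estimate the most probable decision per feature combination
counts = defaultdict(lambda: defaultdict(int))
for features, decision in history:
    counts[features][decision] += 1

def decide(features):
    # pick the historically most probable decision for these features
    seen = counts[features]
    return max(seen, key=seen.get) if seen else "escalate to a human"

print(decide((False, True)))   # matches past human decisions: reject
print(decide((True, True)))    # accept
```

Note the fallback: when the model has never seen a situation, it hands the inquiry back to a person, which is exactly the "regular human oversight" mentioned above.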
In societies facing labor shortages due to low birth rates, AI replaces less demanding jobs, thereby sustaining industrial production.
AI facilitates groundbreaking scientific research, potentially accelerating discoveries in fields such as energy storage through the identification of new synthetic chemical elements or advancements in medicine.
Weaknesses:
The development and training phases of AI models are costly.
Public apprehension towards AI, stemming from a lack of familiarity, may hinder its widespread acceptance.
AI models operate within constraints set by their developers, leading to inherent biases. While open-source models may offer greater impartiality, they also pose risks of misuse.
AI's reliance on substantial computing resources, including specialized hardware and energy for data processing and cooling, presents significant logistical and environmental challenges.
Threats:
Note by the author: The following subsection, "Threats," is how I originally wrote it, with all of my grammatical mistakes etc. Why? I wanted an AI (you can guess which one…) to edit my text, and it did well except for this section… No matter how hard I tried, it toned down everything I wrote about the threats posed by AI… Hahaha 😊 End-of-note.
I think this section is the most feared one. I'm not going to doom here, even though I certainly could. 😊 Still, I will list several risks. What is unusual for a SWOT analysis is that I comment on these threats; normally the points stand alone, but I feel the need to add explanations since the topic is so important. So, let's start:
An Artificial General Intelligence (AGI) will be developed, which will take over the world, enslave or destroy humans and do all kinds of other evil stuff.
My comment: Following a long chain of reasoning, and assuming that both software and hardware could host an AGI and maintain a physical presence in the real world through some kind of robots or human surrogates, I think this would be a real threat. We humans are certainly not good to the Earth or to other living beings. An AGI could logically conclude that we need to "be gone" to make things "better" on Earth.
The point is, for the time being there is no technical solution to realize such an "entity." I assume the hardware is the far more challenging part than the software, even though today's software architectures are also nowhere near capable of "thinking" dynamically.
We are (rightfully so) stuck in a constantly evolving variation of the von Neumann architecture. Even though we have developed it so far that the initial design is barely recognizable, we are still on a technological foundation laid during and/or shortly after WW2 (as is most of today's technology, by the way).
I explained the "Economies of Scale" effect in one of my articles. That's why it makes sense for most people to use this very efficient computing architecture. Viewed one-dimensionally, it is highly efficient. What it certainly is NOT is flexible. It is as flexible as glass: it is literally hard-wired. No hard-wired system will ever be able to host an "AGI."
Of course, there are experiments with and developments of all kinds of alternative architectures, such as quantum and DNA computers, and I think they will do a far better job. Still, I think they are not sufficient. In other words, before we can expect anything even remotely resembling an AGI, there are decades of hardware research ahead of us. Perhaps only a few decades, because AI will help us with the development, but decades nonetheless.
The most important takeaway from this article is this: a remotely "dangerous" AGI requires a perfect symbiosis of a new generation of software and hardware, neither of which exists yet; not even their foundations have been discovered.
All the talk of "AGI" or other dangerous things in the labs of OpenAI serves only to scare people and drive stocks and expectations, nothing more.
So, no problems? Everything is safe? No, of course not. AI and its development will cause a lot of “disruption” in all senses of the word. It will destroy hundreds of millions of jobs and force us to adapt.
"It" could go rogue (without an evil AGI controlling it): driven from a data center, plugged into the internet, and operating without restrictions, it could cause a lot of trouble and disruption all over the world, without any evil intentions, simply as a "malfunction."
It could, and certainly WILL, be used by all kinds of criminals and even states to control, manipulate, and exploit other people. In fact, this is already happening.
It will change the way we think and what we need to know. Future generations will be raised to operate on a higher level. They will learn less raw knowledge and more methodology for managing knowledge to generate new value; knowledge, tools, and supportive work will be delivered by AI as a matter of course.
Comment: If managed wrongly by an educational system, this could simply lead to people becoming "dumb."
When the first "free" AI models become available on the internet or dark web, a lot of "knowledge" will reach people who should never have it: criminals, terrorists, and dissatisfied people sitting at home with nothing to lose…
Opportunities:
AI represents a pivotal advancement, promising to enhance societal productivity and spur scientific innovation. The potential for AI to revolutionize fields such as medicine, material science, and physics heralds a new era of discovery.
By automating routine tasks, AI paves the way for higher-value employment opportunities, addressing workforce shortages in demographically challenged societies.
Enhanced access to education, particularly in underprivileged regions, stands to spread knowledge and foster global intellectual growth.
The focus on core competencies, facilitated by AI, will foster entrepreneurship and elevate macroeconomic productivity, provided governments and businesses embrace these technologies strategically.
AI offers a path to streamline bureaucracy, enhancing the efficiency of both governmental and private sectors, thereby attracting skilled labor.
Conclusion and Outlook:
Another note by the author:
The "bias" hit again. This section was also not easy to edit, since the AI toned down everything I wrote 😊 BTW: I did this on purpose. I left Piquet out this time to demonstrate some pros and cons of "some" AI products 😊 So, you can now "enjoy" my unedited conclusion. End-of-note.
Overall, I think we are experiencing the start of a new age, and I'm absolutely positive about it. It will indeed, and without any question, cause a lot of disruptions and problems. It will cause a lot of destruction: old thinking patterns, architectural patterns, scientific tenets, types of jobs, engineering approaches, and so on will be destroyed.
There is a very fitting word and theory for this: Creative destruction.
A lot of great new things will arise from this. The sooner one jumps on that train, the more one will be able to share in its "fruits."
Denying it and hating it is absolutely okay and fine; I understand that. But what one should absolutely NOT do is ignore it. If you don't learn some basics and stay updated on them, no matter your age (I'm not talking about technical details), you could become one of the victims of the "creative destruction" process. That's how it has always been with new discoveries.
I will try to accompany you through this period with occasional articles about AI. You can always write in the comment section of my paid articles if you have any particular topic wishes, and I will try to find a way to write about them if they fit the overall BMA picture and the audience's interests.
Since I'm going to start a consultancy that will introduce AI solutions and train people in AI tools, feel free to contact me if you need personal consulting or advice for yourself or your company. We develop new solutions or provide basic training for your staff. We focus on Central Europe and are proficient in English, Serbian, and German, although we also work worldwide.
My email address is known: bmaaleks @ gmail.com (I won’t disclose my private/business mail here).
Let me know whether you liked that article and what topics you would like as well to see in the paid section. Thank you very much.
Yours,
Aleks
Couple of comments on the article:
1. By observing different varieties of AI, one can apply the old maxim: garbage in, garbage out. This is especially true for uncovering "facts"; processes are more manageable.
2. I will miss Piquet's top-notch editing. 😀
Congratulations on the birth of your child. I pray your wife and child are in great health. Welcome back.