Category Archives: Artificial Intelligence
Artificial Intelligence in education: Who should be the custodian of the new gold? – CNBCTV18
Posted: October 31, 2019 at 5:48 am
It is common knowledge now that a homogeneous, rote-learning education system might suit some children but not all, thereby isolating many and preventing them from achieving their full potential. What we need to offer our children is a fun experience while learning: one that is immersive, experiential, self-paced, interactive and designed specifically for each child.
Fortunately, the global community has awakened to this crisis in education and has called for quality and inclusive education for all, which is the crux of the United Nations Sustainable Development Goal (SDG) 4. I perceive personalised learning as the foremost means of answering this global call and truly achieving this goal for all children across the world.
Social and institutional implications
The focus here, however, is not on highlighting the merits of personalised learning but on determining whether the concept is feasible and, if so, what its social and institutional implications are.
The definite answer is yes, and information technology is the key to making it happen. Specifically, I am referring to the internet and the growth of Artificial Intelligence (AI), which offers the possibility of harnessing the collective wisdom of the many for the benefit of the individual.
The internet can be seen as offering two levels of information. The first is the actual content that is made available for learners to access; let's call this the first-order information. The second level of information is partially hidden: it is not readily available to everyone and relates to the behavioural aspects of the users accessing and using content; this is the data that is harnessed by AI. Let's call this second-order information.
An article in the May 17, 2017, issue of The Economist considers this second-order information about users accessing and using information on the internet as the new gold. Ben Rossi, in his article "Data revolution: the gold rush of the 21st century", estimates that the amount of data accumulated between 2011 and 2013 was more than nine times the data collected up to 2011; this data is expected to reach 44 zettabytes by 2020.
Personalised BOT
Such information has immense utility when it comes to education. Imagine a scenario where a child in rural India is having trouble with introductory algebra and the teacher's limited knowledge base makes it hard for the child to find an answer. We can expect the teacher to have only a finite set of approaches to teaching algebra, as she or he is constrained by the human brain. Things will be very different, however, if the child is given access to an online system, which I will tentatively call the Global Intelligent Education Platform (GIEP). As part of this system, the child is paired with a personalised BOT right after she/he enters school.
A Bot can be described as a computer program (a set of algorithms) that is able to support and guide a user or users in accomplishing a task, or to automate repetitive tasks, and may grow its own intelligence after mining and analysing huge amounts of data. A Bot is a product of AI!
This BOT develops a keen understanding of the child's attributes and learning preferences by evaluating data about the child's ongoing learning experiences. It also has access to an infinite set of possible interventions, arising from learner-centric data derived from the experiences of millions of other children learning algebra or any other topic worldwide, to help the child overcome learning problems. Personalised learning, the holy grail of education, is a definite reality in this hypothetical scenario. As I perceive it, making this a reality for children today is a distinct possibility.
Before that, however, we must overcome some ideological challenges related to the ownership of the information that the BOT will access. As we have visualised it, the personalised BOT's capacity to impart learning and customise solutions will only be as strong as the amount of information that it can access. Therefore, the strength of the GIEP will depend on whether the information generated by learners all across the world is made accessible to every individual learner, a pure social good.
Role of inter-governmental organisations
Who, then, should be the custodians or managers of this new gold? In many ways, the knowledge available can be considered as the global commons as described by the late Nobel Laureate Elinor Ostrom.
Who, then, can provide and manage this commons? Can governments provide this service? The answer is both yes and no. Yes, because governments do have the mandate to provide the social good; no, because in this case, the commons transcend national boundaries. If I need an analogy, I would point to the global climate system, a global commons which, if not managed properly, leads to climate change.
The unambiguous solution to this dilemma is that the responsibility be taken up by an inter-governmental organisation such as the United Nations or one or more of its specialised agencies such as the United Nations Education Science and Cultural Organisation (UNESCO).
To answer the two fundamental questions posed in the title of this article, I would say that the global community owns the global knowledge commons and that this knowledge should be managed by an inter-governmental agency such as the United Nations.
Artificial intelligence outperforms clinicians on decision of where to send post-operative patients, pilot shows – News – McKnight’s Long Term Care…
Posted: at 5:48 am
Artificial intelligence (AI) won the battle of man versus machine after a pilot study found that the technology outperformed clinicians in triaging post-operative patients for intensive care.
AI was able to correctly triage 41 out of 50 patients (82% accuracy) during the study, while surgeons had an accuracy rate of 70% after correctly triaging 35 patients. The number of incorrect triage decisions was also the lowest for AI, which had an 18% rate. Surgeons had a 30% rate.
The findings could lead to more AI usage when acquiring a patient's clinical information to determine whether they need intensive or standard post-operative care.
"The algorithm will be improved and perfected as the machine analyzes more patients, and testing at other sites will validate the AI model. Certainly, as shown in this study, the concept is valid and may be extrapolated to any hospital," said study co-author Marcovalerio Melis, MD.
Details from the pilot study were presented during the American College of Surgeons Clinical Congress 2019 this week.
The end of humanity: will artificial intelligence free us, enslave us or exterminate us? – The Times
Posted: at 5:48 am
The Berkeley professor Stuart Russell tells Danny Fortson why we are at a dangerous crossroads in our development of AI
The Sunday Times,October 27 2019, 12:01am
Stuart Russell has a rule. "I won't do an interview until you agree not to put a Terminator on it," says the renowned British computer scientist, sitting in a spare room at his home in Berkeley, California. "The media is very fond of putting a Terminator on anything to do with artificial intelligence."
The request is a tad ironic. Russell, after all, was the man behind Slaughterbots, a dystopian short film he released in 2017 with the Future of Life Institute. It depicts swarms of autonomous mini-drones, small enough to fit in the palm of your hand and armed with a lethal explosive charge, hunting down student protesters, congressmen, anyone really, and exploding in their faces. It wasn't exactly Arnold Schwarzenegger blowing people away, but
Chatbots and artificial intelligence influence in education | Opinion – Indiana Statesman
Posted: at 5:48 am
Chatbots can be used for several purposes, such as helping customers and answering complex FAQs.
They have even been used to help pick candidates in recruitment processes, so it is no surprise that the educational system is trying to implement chatbots.
The scope of application could extend to administration, with the aim of facilitating procedures, serving as a date reminder, assisting in the reinforcement of educational content, and supporting mentoring and accompaniment actions.
Properly trained with a huge quantity of data, a chatbot could ease both the educational process of the student and the tasks of the teacher.
This artificial assistant could respond to a 24/7 demand, allowing professors to take care of the most qualitative tasks.
There is still reluctance from students and teachers to interact with machines, but once chatbots demonstrate their efficiency and gain the confidence of both parties, we will perhaps see a boost in their use in the educational field.
Here are some applications of both chatbots and artificial intelligence within the educational area that could have an astounding impact on the whole industry:
Essay Scoring.
Feedback on individually written essays is a time-consuming job that many educators are grappling with, and the problem is even bigger in massive open online courses.
Because there are often more than 1000 students in one class, there is clearly no realistic way for written essays to be given individual feedback.
Innovators have flirted with the artificial intelligence (AI) industry to combat this problem, and a solution is close to hand.
By feeding thousands of essays to a machine-learning algorithm, many believe there is a good chance of replacing human input on essay feedback with AI systems.
Learning Through Chatbots.
Intelligent tutoring systems are a common application of artificial intelligence that provide students with a customized learning environment by analyzing their responses and how they go through the learning material.
Likewise, chatbots with artificial intelligence software can be used to teach students by converting a lecture into a series of messages, making it look like a regular chat conversation.
The bot will constantly determine the student's level of understanding and thus present the next section of the lecture.
Botsify is a chatbot for education that works in a similar way.
In the form of text, pictures, videos or a combination of these, Botsify introduces a specific topic to the students.
Students take quizzes after studying the subject and send the findings to their teachers. The teachers can easily monitor the grades of the students as well.
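To make that flow concrete, here is a minimal sketch of a lecture delivered as a chat. This is not Botsify's actual implementation, which is not public: the bot sends one section at a time, asks a quiz question, and only advances once the student answers correctly. The lecture content and the simulated student answers are invented for illustration.

```python
LECTURE = [
    {"text": "Section 1: A variable stores a value.",
     "question": "Does a variable store a value? (yes/no)", "answer": "yes"},
    {"text": "Section 2: A loop repeats instructions.",
     "question": "Does a loop repeat instructions? (yes/no)", "answer": "yes"},
]

def run_bot(student_answers):
    """Present the lecture one section at a time, repeating a section until its quiz is passed."""
    answers = iter(student_answers)
    for section in LECTURE:
        passed = False
        while not passed:
            print(section["text"])
            print(section["question"])
            reply = next(answers, None)
            if reply is None:          # the student stopped answering; end the session
                return
            passed = reply.strip().lower() == section["answer"]
            print("Correct, moving on." if passed else "Let's review that section again.")

# Simulated student: gets the first quiz wrong once, then answers correctly.
run_bot(["no", "yes", "yes"])
```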
Enhance student engagement.
Students are familiar with instant messaging sites and social media nowadays.
Whether they want to chat, solve problems or find the best helper, they turn to a digital help desk.
This can be used to increase students' training and interest in a topic.
Teachers and students can use these messages to communicate with classrooms and offices, with other students, and about different activities.
Students would find it easy to learn about tasks, due dates or other important events.
CourseQ is a chatbot created to provide a simple way to talk to students, groups and teachers.
It can be used by a group to transmit messages and respond to queries from students.
Students may use it to ask class questions, and teachers may use it to interact with students, ask questions and address their concerns.
Better student support.
Here, chatbots can bring tremendous value.
More use can be made of the chatbots that support the students during the admission process by providing all the necessary information about their courses, modules and faculty.
The bots can also act as campus guides when students arrive on campus.
They will help the students learn more about scholarships, hostels, library membership, etc.
Efficient Teaching Assistants.
Students also post questions on the web and look for someone to help them complete the activities and address their concerns.
Moreover, new educators need help to ease their hectic schedules.
Bots are used as electronic teaching aids to perform teachers' repetitive tasks.
Such bots are used to answer questions on the course module, classes, assignments and deadlines.
Instructors can also track the students' learning progress. Chatbots can provide the students with direct reviews.
Lastly, chatbots will assess the educational needs of the students and prescribe learning material accordingly.
SmartStream Introduce a New Artificial Intelligence Module to Capture Missed Payments and Receipts – Business Wire
Posted: at 5:48 am
LONDON--(BUSINESS WIRE)--SmartStream Technologies, the financial Transaction Lifecycle Management (TLM) solutions provider, today completed a proof of concept for an artificial intelligence (AI) and machine learning module within its existing TLM Cash and Liquidity Management solution for receipts and payments - essential for any business in terms of liquidity risk and regulatory reporting.
Technology that meets the market demand for forecasting liquidity has been the backbone of SmartStream's intraday liquidity management solution. The next phase of the solution's development is about predicting the settlement of cash-flows. SmartStream has been working on a proof of concept with its clients for profiling and predicting intraday settlement activity, including identification of missed payments and receipts planned for settlement within the current date. Cash management teams will gain greater visibility into the payment process and manage liquidity risk more efficiently, minimising the potential of payments being missed.
Andreas Burner, Chief Innovation Officer, SmartStream, states: "This proof of concept is clearly another important step towards ensuring that our clients are keeping pace with what the regulators are demanding, and in particular the questioning of a bank's position and the management of its outstanding balances. By combining our recent achievements in AI with SmartStream's many years of experience in this area, the Vienna-based Innovation Lab developed this new AI cash and liquidity prediction module. The technology continuously learns data patterns so the service continues to improve and become more efficient."
The new TLM Cash and Liquidity Management, AI and machine learning module is an important development for any financial institution with a treasury department, with its ability to predict when credit is going to arrive; giving the treasurer more control over cash-flows. The proprietary algorithm uses the data and predicts the forecasted settlement time of receipts on an intraday basis. The core of the module is underpinned by sophisticated machine learning technology that continuously improves, meaning the predictions become more accurate and treasurers can make more informed decisions.
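SmartStream has not published the algorithm itself, so the following is only a rough sketch of the general idea described above: learn from historical receipts how long payments take to settle, then forecast the settlement time of today's expected receipts so that late ones can be flagged. The features, values and choice of model are illustrative assumptions, not the vendor's method.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 500
counterparty = rng.integers(0, 5, n)          # encoded counterparty id
amount = rng.uniform(1e4, 1e7, n)             # payment amount
hour_instructed = rng.uniform(7, 12, n)       # when the payment was instructed
# Synthetic ground truth: settlement delay depends on counterparty and payment size.
settle_minutes = 30 + counterparty * 20 + np.log10(amount) * 5 + rng.normal(0, 10, n)

X = np.column_stack([counterparty, amount, hour_instructed])
model = GradientBoostingRegressor().fit(X, settle_minutes)

# Predict when a new expected receipt is likely to arrive, so a late one can be flagged.
expected = np.array([[3, 2_500_000.0, 9.5]])
print(f"Predicted settlement: {model.predict(expected)[0]:.0f} minutes after instruction")
```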
Nadeem Shamim, Head of Cash & Liquidity Management, SmartStream, says: "Things are going to get tighter in terms of managing liquidity. Collateral is expensive, capital is expensive and there is currently a big drive to reduce excessive use of capital. This is an area where AI and predictive analytics can manage liquidity buffers more efficiently, and that can result in significant savings."
AI and machine learning provide the banks with the opportunity to look at reducing the liquidity buffer. The rigorous analysis of unstructured data and learned settlement predictions reduces costs. It also offers another tool that can be used to mitigate the impact of reputational risk as it relates to the ability to meet payment obligations by allowing greater visibility into exposure limits with predicted forecasting. The new SmartStream user interface enables users to drill down into individual cash-flows.
Artificial intelligence expert: True artificial intelligence should also have a consciousness, but we are far from that – The Slovak Spectator
Posted: at 5:48 am
Artificial intelligence is an issue that has gained much popularity in the past few years.
This is also evident in the number of technologies referring to artificial intelligence (AI). Autonomous cars and personal assistants like Apple's Siri are often spoken about, while machine learning, deep learning and neural networks are frequently featured in written text. What do these terms mean, and what is the difference between them? How far has technology based on elements of AI progressed? We discussed these topics in a series of interviews with Juraj Jánošík, an expert on artificial intelligence at the ESET company.
If we can simulate human intelligence, consciousness and thinking with some technology, we achieve artificial intelligence. There is a term for it - artificial general intelligence - but there is also a concept called super intelligence. While artificial general intelligence (AGI) is meant to imitate human thinking, including its faults, super intelligence (SI) should go even further and exceed the limits of human consciousness and thinking, and considerably surpass them. However, there are more philosophical discourses involved, and we have to admit that currently, we are still far behind, even in the development of AGI.
These terms are frequently confused, even by professionals. Simply put, artificial intelligence is an umbrella notion. It includes a wide range of topics that also cover the issues of robotics, machine learning and so on. Thus, machine learning is just one sphere of AI, and currently, it is probably gaining the most attention. Deep learning, on the other hand, is just one part of machine learning. This sphere is inspired by how the brain functions and tries to simulate the connection between neurons in the brain.
The idea of machine learning is quite simple. We have a lot of data available, and through ML, we want to make a compact representation of it. This means that if I have a huge amount of data, I do not have to sort through it all on my own. It is enough for me to take a smaller sample, sort it, and use an algorithm on it in order to assign the basic sorting/classification. Then, I let the learned algorithm work on another, smaller sample, and watch whether it sorts it out according to my wish. If not, I adjust its behaviour, for example by specifying criteria. If I am satisfied with the algorithm's performance, I use it for the whole database, and the algorithm sorts it on its own in a much shorter time than any human would manage.
For example, if we want to teach a computer how to distinguish a cup, we load thousands or millions of photos of cups and glasses. Of these pictures, the algorithm tries to create some sort of generalisation on its own. Then, when I show it a new photo of a cup, it will be able to tell what is the probability that this is a cup. If I am not content with the results, I can adjust the criteria, for example, by telling it the object is a cup and so on.
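A minimal sketch of that workflow, on synthetic data rather than photos of cups: a small hand-labelled sample trains a classifier, a second sample checks the learned sorting, and only then is the whole database sorted automatically. The data, the hidden rule and the choice of a decision tree are all illustrative assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
full_database = rng.normal(size=(10_000, 4))                  # the big unlabeled pile
true_label = full_database[:, 0] + full_database[:, 1] > 0    # hidden rule, unknown to us

hand_labeled = slice(0, 200)     # the small sample we sorted by hand
check_sample = slice(200, 400)   # a second sample used to judge the learned sorting

clf = DecisionTreeClassifier(max_depth=3)
clf.fit(full_database[hand_labeled], true_label[hand_labeled])

accuracy = clf.score(full_database[check_sample], true_label[check_sample])
print(f"Agreement with the hand sorting on the check sample: {accuracy:.0%}")

# Satisfied with the performance? Let the algorithm sort everything else in one pass.
predictions = clf.predict(full_database)
```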
Currently, when AI is mentioned, it is machine learning that is talked about the most. It already functions on a regular basis by, for example, recommending users programmes on Netflix based on the programmes they have already seen. Mobile phones that categorise photographs, autonomous cars and cyber-security are also examples of machine learning we engage in. Right now, the biggest discussion in AI revolves around machine learning; global companies like Google and Apple are investing massively in these technologies.
Deep learning also interprets bulks of data, of which we need to make a compact representation. This is called a model, which will then make predictions. However, we will not use tree algorithms but rather neural networks.
Neural networks are inspired by how the human brain works, by the functioning of neurons. The brain is basically a huge network of neurons, which is entered through some inputs. These inputs are evaluated in the brain, and then the brain sends the outputs into our organism. The neural network works the same way. We have some inputs that enter the network. The network assigns a certain significance to the entries, evaluates them, and then returns the outputs to us.
Let us try, for instance, to explain it through the example of a decision tree, which is a common classification algorithm in ML. In a decision tree, each decision takes me to another one, followed by another, similar to a tree growing. Either I climb to one branch, or to another, and then I face the next branch, the next layer. AI during machine learning works in a similar way: either this, or that etc., round and round.
By contrast, when it comes to neural networks, this impulse enters something we can imagine as a network and crosses it by passing several layers simultaneously, or can even return back. There are even neural networks with cells that decide on what I will use in this situation and what I will dump, but I will remember it and can use it later. So, it is closer to the real functioning of the brain, even though this is not an exact copy of how the brain works, of course.
Simply put, yes. This also implies a fundamental difference, which concerns the interpretability. In the decision tree, I can find out retroactively why the AI decided the way it did. I can look back at individual steps and evaluate in which step it decided in which way. With neural networks, this is not so simple, as the path of the impulse is not direct. The impulse is evaluated many times, has a certain weight allocated, and the algorithm creates a generalisation. But I cannot say why it allocated certain weights to individual neurons and determine why it decided the way it did. This is a big problem when applying these technologies on decisions involving humans, as we cannot say clearly why the due model has decided in a certain way.
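A small illustration of that interpretability point: after fitting a decision tree we can print the exact thresholds and branches behind every prediction, which is precisely what a trained neural network does not offer. The iris dataset is used here purely as stand-in data.

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(iris.data, iris.target)

# Every prediction can be traced back to explicit thresholds on named features.
print(export_text(tree, feature_names=list(iris.feature_names)))
```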
Yes. For example, in the banking sector, AI is used to evaluate a clients creditworthiness. This is a very sensitive issue, in that people want to know why the bank has not approved their credit application. Hardly anyone wishes to hear that it was artificial intelligence that decided on this, and, moreover, we are unable to explain why.
Apart from this, there is also the issue of input data. AI can learn incorrect generalisations based on the data available, for example racism. Statistically, the input data may imply that there is higher probability of a specific group of the population not repaying a mortgage. An incorrect selection of data can lead to prejudice in the decision-making process of the resulting model. However, we try to prevent this in our work.
Technically, we could already apply machine learning and deep learning in banking, but this has not been done on a mass scale for the abovementioned reasons.
Yes, there are still some other ways. Many algorithms used today are old: some were established back in the 1950s, or even earlier. Basically, everything we draw from today goes back to the 1950s, 1960s, or 1970s. The last considerable innovations date back to the 1980s and 1990s. Since then, we essentially only improve what has already been invented. Or, we have found practical use for algorithms which were only on paper.
Paradoxically, the development of ML was aided by computer games, as they drove increases in computing performance, especially in graphic chips. Another factor was the arrival of technologies associated with big data and big-capacity, fast repositories. Until then, there was also a problem with databases. The development of these two spheres, i.e. computing performance and databases, has led to the current situation: most companies focus on machine learning, which inevitably depends on both. But there are definitely more ways.
Well, not just the games, but we owe them considerable credit for this. When we reduce machine learning to its mathematical extreme, it is basically composed of matrices. Simply put, we need to calculate with decimal numbers and multiply them in huge quantities. This is what effectively happens at the computing level. And in this, computers are much better than humans. Exactly the same happens with graphic cards when rendering the environment in which games are played, or when watching an HD video. Basically, these are the same mathematical operations.
Of course, algorithms have been developing and improving, and new ones have even appeared. So, you cannot say that we have made zero progress. However, it is true that even the current models of AI, commonly used in today's products, were conceived in the 1990s. The development of machine learning, widely popular today, occurred thanks to the development of the technologies mentioned above. When it comes to ideas, we have not moved on fundamentally in the past 20 to 30 years. I have not seen any revolutionary idea that would considerably change the development of AI. We all rather work on the foundations laid in the past, and we are improving them.
Let us use the very popular algorithm LSTM, or long short-term memory, as an example. This is a type of deep learning based on neural networks, which is used to process sequential data, i.e. mainly images and sound. It is exactly this algorithm that is used when creating fake videos, the so-called deep fakes, which are very popular now, too. The video of Barack Obama that spread around the world, in which he says something he never said in reality, was produced in part with an algorithm invented by two people at Graz University back in 1997. However, the algorithm didn't become popular until recently. In the 1990s, most people and companies could not afford a computing device that could manage such a performance. Today, it is much more affordable.
There is an older approach that is effective and often used in robotics or in industrial management: genetic algorithms. This approach is inspired by the replication and division of cells, which sometimes involve mutations. Similarly, with an algorithm we define some population, we change the functions over time, and watch how these changes, or mutations, change the results.
I can use a slightly bizarre example from the stock exchange. In order to predict how the stock exchange will develop over time, we need to follow numerous indices. We seek the right function for when it is the best moment to conclude a deal, to sell shares, etc. For this, we need to follow many parameters and search for balance between them depending on previous data. Thus, we create a function, enter input data, and see how the whole system functions and how it changes. Afterwards, we make changes to this function, i.e. the mutations, and watch how the system has changed and how it is developing. We do this again and again, until we find a suitable model which will at least partially represent reality.
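A toy version of that idea, with an invented "market" history instead of real stock-exchange data: a population of candidate parameter pairs is scored against past observations, the best survive, and the next generation is produced by randomly mutating the survivors.

```python
import random

random.seed(0)
history = [(x, 3.0 * x + 1.0 + random.uniform(-0.5, 0.5)) for x in range(20)]  # past observations

def fitness(params):
    a, b = params
    return -sum((a * x + b - y) ** 2 for x, y in history)   # higher is better (smaller error)

population = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(30)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    # Mutation: each child is a survivor with small random changes to its parameters.
    population = survivors + [
        (a + random.gauss(0, 0.2), b + random.gauss(0, 0.2))
        for a, b in random.choices(survivors, k=20)
    ]

best = max(population, key=fitness)
print(f"Best model found: y = {best[0]:.2f} * x + {best[1]:.2f}")
```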
There is an approach called good-old-fashioned AI. In Slovak, it is usually defined as an expert system. In this case, an expert who understands AI defines fixed rules, according to which the programme will behave. These rules can change in the process, but they will again be changed by the expert, not by the AI itself. Thus, this is human supervision of artificial intelligence.
Another popular approach is represented by Markov chains, whose foundations were laid by the Russian mathematician Andrey Markov at the beginning of the 20th century and which are nowadays widely used as statistical models of real processes. They are used in robotics and finance, to optimise queues at airports, as well as in the PageRank algorithm of the Google search engine. These methods have become the basis for the area of machine learning known as reinforcement learning. Reinforcement learning, combined with expert systems, was used, for example, for AlphaGo.
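A minimal Markov-chain sketch in the spirit of the airport-queue example: the state of a queue evolves according to fixed transition probabilities, and multiplying the state distribution by the transition matrix a few times tells us how likely each queue length is later on. The states and probabilities are invented for illustration.

```python
import numpy as np

states = ["short", "medium", "long"]
# transition[i][j] = probability of moving from state i to state j in one time step
transition = np.array([
    [0.7, 0.25, 0.05],
    [0.3, 0.5, 0.2],
    [0.1, 0.4, 0.5],
])

distribution = np.array([1.0, 0.0, 0.0])   # the queue starts short
for _ in range(10):                         # evolve the distribution ten steps forward
    distribution = distribution @ transition

for state, p in zip(states, distribution):
    print(f"P(queue is {state} after 10 steps) = {p:.2f}")
```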
For instance, the media broke the story about artificial intelligence defeating the best player in the game called Go. AI Watson from IBM was also highly publicised. These forms of artificial intelligence combine machine learning and expert systems. Their use is limited, but they have an excellent understanding of defined boundaries. However, that is all. The bottom line is that we have AI that can defeat someone in games but that cannot make decisions in other spheres.
Watson, for instance, is good at putting things into context. The paradox, though, is that it cannot decide based on what it has discovered. So, it is not a conscious or purposeful activity. Watson is great when diagnosing an MRI. The analysis implies that it has a higher rate of success than most radiologists. This is understandable, as a radiologist's effectiveness is derived from their experience, from how many X-rays they have already seen. This is, essentially, machine learning.
Moreover, radiologists' decision-making is often impacted by human weaknesses like fatigue, current mood, or whether the person is hungry or thirsty. Watson is not affected by such factors. It is enough to pour thousands or millions of X-ray images classified as good, bad, or whatever into the AI. Based on these images, Watson is able to predict a diagnosis with a very high success rate. It has the capacity to see more images than a single radiologist can see in their lifetime. Moreover, it can even recognise a change in a single pixel, which is close to impossible for humans.
ML is popular because it is apt for a wide scale of tasks we face in everyday life. For instance, it can make a prediction based on previous data and look for anomalies, and it has computer vision, which has been functioning for many years. So, ML is not popular because it is the best form of AGI.
Many wise people even consider ML a deadlock. Returning to the example with the cup I mentioned: if you want to teach a child what a cup is, you do not show them a million cups. The human brain does not work like this.
This is a good question. There are two ways to view this question. What does an insufficient computing performance mean? Some researchers claim that if we wanted to translate the human brain into a computer, we could model it into the currently best-performing computer globally. One human brain into the best-performing and most expensive super-computer - that does not seem very effective.
This is another problem. An approximate estimate can be made. The performance of modern computers is calculated in units called FLOPS (Floating-point Operations Per Second). Roughly said, this is the number of operations a computer can do in a decimal point of a second. The best-performing super-computers have calculating performance in tens of peta-flops, or something crazy like that. In other words, these super computers can calculate simulations of the atmosphere and nuclear explosions with complex equations with a billion parameters. The human brain would do such calculations with only the estimated speed of 0.001 FLOPS. But the human brain can do other things todays computers would not be able to simulate, as they are too complicated.
This is not the focal point; it is centred more on consciousness or deciding. These are things we do not understand properly. We do not know how consciousness works, but we do know that it does not work in the way ML does. Nobody looks at 500,000 photos of cats to recognise a cat on the street. That is why we cannot speak about AI yet; there are many challenges, and we are just talking about simulating a single average human brain.
We have just touched on it: we do not know how consciousness works. Nobody knows what consciousness is. There are philosophical definitions but we lack a mathematical model we would be able to use. We have no clear definition.
This is another question in this debate. If we want to achieve full-fledged AI, it has to have consciousness. Without it, it will be a mere list of rules which the machine follows, but its decision-making will not be independent. For the AI to be independent, it has to have its own ability to think, just like a human. And, of course, we humans make mistakes as well, so if AI is designed by us, it will probably not have perfect thinking and will probably make mistakes, just like us. And if it did not make them, we would already be talking about super intelligence.
If we talk about creating AI like human intelligence, then it should have all these requirements, like a personality, and should make mistakes and learn lessons from them.
Exactly. Each form of artificial intelligence develops in a certain way. One has grown up in one laboratory, another in a different one. This development is distinctive, and different AI learn on different inputs, offering a different kind of evaluation, just like humans. Otherwise, we would be talking about super intelligence, which always gives perfect outputs.
I do not think so. There is a group of people who dream about it, but we also know of quite a big group that does not agree with it. This group includes many respected people, like the late Stephen Hawking, Elon Musk, and Bill Gates, who think we should not take this path.
I see it pragmatically. Why should humans do monotonous, boring things, if a computer can do it better? I would rather my MRI be evaluated by a really good computer with a high success rate than an average or below-average radiologist.
Thus, the key question is: what kind of AI are we discussing? Do we just want intelligent help from a computer, artificial general intelligence, or super intelligence? From my point of view, the main goal now is to create AI that helps us with problems we cannot solve, or solves them much more effectively. We have not yet progressed further than that.
Yes, it is possible. The question rather is do we want it at all? As I already indicated, I see it from a practical point of view. Elements of AI can help us greatly in everyday life, and that is exactly what we here, in ESET, are working on. Why not use it?
Juraj Jánošík received a Bachelor's degree in applied informatics and a Master's degree in robotics at the Slovak University of Technology in Bratislava. In 2008, he joined the ESET company as a malicious code analyst. Since 2013, he has been leading the team responsible for automatic detection of threats and artificial intelligence. He is currently responsible for integrating machine learning into the detection kernel. He regularly lectures at specialist conferences around the world.
28. Oct 2019 at 6:00
Why Fujifilm SonoSite is betting the future of ultrasound on artificial intelligence – GeekWire
Posted: at 5:48 am
Fujifilm SonoSite CEO Richard Fabian holds up the SonoSite 180, the company's first mobile ultrasound device that debuted in 1998, during the Life Science Washington Summit on Oct. 25, 2019. (GeekWire Photo / James Thorne)
Decades of technological advances have led to a revolution in ultrasound machines that has given rise to modern devices that weigh less than a pound and can display images on smartphones. But they still require an expert to make sense of the resulting images.
"It's not as easy as it looks," said Richard Fabian, CEO of Fujifilm SonoSite, a pioneer of ultrasound technologies. "A slight movement of your hand means all the difference in the world."
That's why SonoSite is focused on a future in which artificial intelligence helps healthcare workers make sense of ultrasounds in real time. The idea is that computers can be trained to identify and label critical pieces of a medical image to help clinicians get answers without the need for specially trained radiologists.
"Using AI you can really quickly interpret what's going on. And the focus is on accuracy, it's on confidence, and it's on expanding ultrasound users," Fabian said during a talk at Life Science Washington's annual summit in Bellevue, Wash., on Friday.
Bothell, Wash.-based SonoSite recently partnered with the Allen Institute for Artificial Intelligence (AI2) in Seattle on an effort to train AI to interpret ultrasound images. To train the models, SonoSite is using large quantities of clinical data that it gathered with the help of Partners HealthCare, a hospital in Boston.
Artificial intelligence has shown promise in interpreting medical imaging to diagnose diseases like early-stage lung cancer, breast cancer and cervical cancer. The advancements have drawn tech leaders including Google and Microsoft, who hope their AI and cloud capabilities can one day be an essential element of healthcare diagnostics.
SonoSite was initially launched with the idea of creating portable ultrasounds for the military. Its lightweight units are widely used by healthcare teams in both low-resource settings and emergency rooms.
Ultrasound imaging is significantly more affordable and portable than X-ray imaging, CT scans or PET scans, without the risk of radiation exposure. While the images it provides are not as clear, researchers think deep learning can make up some of that difference.
AI2 researchers are in the process of training deep learning models on ultrasound images in which the veins and arteries have been labeled by sonographers. One application of the AI-powered ultrasound would be to help clinicians find veins much faster and more accurately.
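Neither AI2 nor SonoSite has published the models, so the following is only a generic sketch of the kind of training setup the article describes: a small convolutional network learns, from frames in which vessel pixels have been labelled, to predict a vein/artery mask for new frames. Random tensors stand in for real ultrasound images and sonographer labels, and the tiny architecture is purely illustrative.

```python
import torch
import torch.nn as nn

model = nn.Sequential(                      # deliberately tiny stand-in for a real segmentation net
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 1, kernel_size=3, padding=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()            # per-pixel "vessel / not vessel" decision

frames = torch.rand(16, 1, 64, 64)                  # stand-in ultrasound frames
masks = (torch.rand(16, 1, 64, 64) > 0.9).float()   # stand-in sonographer labels

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(frames), masks)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```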
Fabian also gave the example of AI models labeling things such as organs and fluid build-ups inside the body, which could inform care decisions without the need for specialists. He thinks that future ultrasounds could deliver medical insights without ever displaying an image.
"If ultrasound becomes cheap enough, it could become a patch [that gives] you the information that you need," said Fabian.
Use artificial intelligence to create rather than to kill, says AI pioneer – Euronews
Posted: at 5:48 am
From self-driving cars to Amazon's Alexa, artificial intelligence is already here.
But some fear AI could be used to create killer robots.
Nobel Peace laureate Jody Williams is helping to lead a campaign for a new international treaty to prohibit lethal autonomous weapons, which select targets to fire without consultation from a human being.
Williams said at a news conference on Monday that killer robots "are crossing a moral and ethical Rubicon and should not be allowed to exist and be used in combat or in any other way".
While more and more governments are heavily investing in AI for military purposes (the Pentagon alone has pledged up to two billion dollars for AI research), Ben Goertzel, the man behind Sophia the Robot's brain, told Euronews that he hopes we use it for broadly beneficial applications like health and education.
I have no illusion I'm going to stop the militaries of the world from developing autonomous weapons systems," he said.
"I prefer to focus my own energy on more obviously broadly beneficial AI applications.
"So my hope is that you know more and more of the human smarts in the AI field and of the AI smarts (boffins) will go to discovering new things, educating kids and curing disease, he said.
I hope we don't end up with a situation where most of the AI in the world is going into something like a ton of autonomous weapons although I'm sure that's going to be their human society being what it is.
However, the AI expert pointed out that, in his view, advertising and surveillance rather than autonomous weaponry have been driving more of the AI research field.
Though, in his ideal world, AI would only be used in the "creative fields" such as science, maths, and arts.
Ultimately for Goertzel, AI should be about helping rather than destroying.
"What I think is more important is just to put more energy enthusiasm creativity and resources behind the more obviously beneficial applications and then you're sort of you're shifting where the centre of gravity of the human-AI interaction is."
In mid-November, the parties to the Convention on Conventional Weapons will be meeting in Geneva, and they could agree to start negotiations on banning lethal autonomous weapons.
Artificial intelligence startup Aegis AI rebrands as Actuate; launches new intruder-and-threat-detection AI solutions to keep the society safer from…
Posted: at 5:48 am
Aegis AI, an artificial intelligence startup that builds software which employs computer vision to automatically detect weapons in security camera feeds, today announced that it is rebranding as Actuate and has launched new AI threat-detection features.
Actuate was founded in early 2018 by University of Chicago MBAs Sonny Tai and Ben Ziomek. Tai is a former Marine Corps captain who spent his formative years in Johannesburg, South Africa, where gun violence rates are some of the highest in the world, while Ziomek brings deep data science and AI expertise gained from his time as a program manager at Microsoft.
The New York City-based Aegis Systems is a venture capital-backed AI startup that provides computer vision software to turn any security camera into a threat-detecting smart camera. The Aegis AI system automatically detects firearms in existing security camera feeds, providing early warning and dramatically improving law enforcement response.
After careful analysis of the company's market positioning, Actuate leadership decided to adopt the new brand name in alignment with its new features, which expand the firm's offerings beyond gun detection, the company said in a press release.
The new features include intruder- and threat-detection AI solutions. The Actuate system can now alert customers to unauthorized entry to customer facilities and catch individuals acting in a threatening manner even before weapons are fully visible.
"Our new intruder- and threat-detection features help make our customers safer," said Ben Ziomek, Actuate chief product officer. "We're excited to offer them to the market as we debut a new brand that highlights our expanded scope."
Actuate is also launching new, vertical-specific marketing content that targets healthcare, education, corporate, and public-sector customers as part of the rebrand.
The new Actuate logo modernizes the brand while maintaining the key colors and look-and-feel that defined the Aegis brand. It moves the brand from a focus on its defensive posture to a more general, enterprise-ready brand that opens the door to Actuates use as a building-management platform.
"This rebrand recognizes that we've grown as a company," said Tai, Actuate's chief executive officer. "I'm thrilled to announce new intruder- and threat-detection features, which we feel strongly contribute to our central goal of making society safer."
Actuate is an AI company that builds computer vision software to turn any security camera into an intruder- and threat-detecting smart camera, dramatically cutting the time it takes for law enforcement to respond to gun violence. Former Marine Sonny Tai, CEO of Aegis AI, and Ben Ziomek, CPO of Aegis AI, co-founded the company with the mission of addressing America's gun violence epidemic. The product features include alerting customers to unauthorized entry to customer facilities, as well as catching individuals acting in a threatening manner even before weapons are fully visible.
Artificial Intelligence and the Information Lifecycle – Nextgov
Posted: at 5:48 am
The year is 1989 and we're introduced to the World Wide Web. The Berlin Wall is coming down. The Exxon Valdez is spilling oil in Prince William Sound, Alaska. Students are calling for democracy and free speech in Tiananmen Square. Crockett and Tubbs are clearing the mean streets of Miami. A future pop star by the name of Taylor Swift is born. This all occurred 30 years ago, around the same time as, if not more recently than, a number of government systems were put into place.
Fast forward to 2019 and consider all the disruption that emerging technology is presenting to the federal government. Blockchain, quantum computing, the internet of things, robotics, 5G: the list goes on. What does this mean? When you consider the capabilities of these new technologies alongside 30, 40 or 50-year-old legacy systems, agencies are generating large volumes of records, information and data in multiple formats, physical and digital, that must be leveraged and stored effectively.
No matter the format, all this information is part of a lifecycle: Agencies create it, use it, store it and destroy it. Besides the sheer volume of information, this lifecycle process is no different now than 30 years ago. The question is, how can agencies better manage that lifecycle? And, what can they put into place to glean insights from the information wherever it is within that lifecycle?
Records Managers Can Help
In light of all this information, and pressing National Archives and Records Administration electronic records deadlines, government records managers can help by:
1. Transforming to a New Way of Working
Records managers need to manage information in new ways. Many agencies today struggle with the efficiency of their records and information management (RIM) programs, requiring an investment in capital and resources. Agencies should move to a more optimized IT environment consisting of colocation and cloud services; automated business processes; and outsourcing of non-core processes. This will allow agencies to repurpose their space, reallocate their resources, and achieve a new level of digital maturity.
2. Mitigating Risk
The increase in the volume and variety of information that agencies are experiencing also exposes them to additional risk, whether from breach, cyberattack or loss. Agencies must mitigate this risk by not only securing where that information resides and how it's accessed but also setting and enforcing retention policies, enabling them to know what they can defensibly destroy and when. This also better prepares them for audits or other compliance activities because they know what they have, where it resides and how long they must keep it.
3. Extracting Value
Lastly, by understanding the value of the information, agencies can make better decisions to drive their mission forward. Sixty percent of records, according to the Association for Information and Image Management, are unstructured, meaning that they are providing little value. And, it is estimated that organizations use only 5 to 10% of their overall data. An example of a way for agencies to extract value is to access information that may be available on old media that can be recovered and restored, then harness the power of the information to gain the insights they need to improve operations. Another example includes using AI to extract value out of information used to feed a current workflow, such as being able to pull unstructured data from forms or records that would otherwise require a manual process.
Artificial intelligence capabilities can be the driver behind this third area.
AI and RIM
Incorporating AI into the information lifecycle management function enables agencies to classify and extract information once, then reuse it downstream; seamlessly integrate content types, from physical to digital; derive actionable insight from dark data (information collected and stored, but never used); as well as ensure the information is managed according to policy.
Agencies should consider using AI with machine-learning capabilities to automatically classify, extract and enrich physical and digital content. ML-based classification of an agency's physical (paper, tape) and digital (application-generated, human-generated) information adds structure, context and metadata to information to make it more predictable and usable. The resulting enriched content can then enable enhanced automation in terms of governance and workflow across the agency.
This can be accomplished by:
Ultimately, incorporating AI with a comprehensive information lifecycle management approach allows agencies to ingest multiple document formats into a single system, apply ML algorithms at the appropriate point, add or replace those algorithms as necessary, and offer deeper insights to achieve greater efficiencies and reduced risks.
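As a hedged illustration of the classification step described above (not Iron Mountain's actual tooling): a handful of already-classified records train a text model that routes new, unstructured documents to a record category, which in turn can drive retention metadata. The categories, sample texts and retention periods are invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

training_docs = [
    "invoice for services rendered, payment due in 30 days",
    "purchase order for office equipment",
    "employee onboarding form with start date and position",
    "benefits enrollment form for new hire",
]
categories = ["financial", "financial", "personnel", "personnel"]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(training_docs, categories)

new_record = "vendor invoice, net 45 payment terms"
category = classifier.predict([new_record])[0]
retention_years = {"financial": 7, "personnel": 30}   # illustrative retention schedule
print(f"category={category}, retain for {retention_years[category]} years")
```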
Government agencies are generating significant amounts of information and records in both physical and digital formats. In order to enable agencies to better leverage all this information to support the mission, they need to consider an overall information lifecycle management approach that includes comprehensive AI and ML capabilities.
Sue Trombley is the managing director of global engagement for Iron Mountain.