The Prometheus League
Breaking News and Updates
Category Archives: Ai
AI Weekly: The sudden speed of technological change in a coronavirus world – VentureBeat
Posted: April 11, 2020 at 7:28 pm
COVID-19 waits for no one, and the speed of its spread has forced the world to act with unprecedented haste. From world governments to individual households, everyone has suddenly had to scrap plans, make new ones, and then try to hang on to some kind of new normal as the pandemic causes more unexpected and rapid shifts.
In the tech world, watching everyone move so fast has been quite a sight. As I wrote a few weeks ago, there's been a digital flotilla of tech people focusing on, and turning their expertise toward, solving the problems related to COVID-19. But with increased speed comes increased noise, which is a mixed blessing.
Like (I presume) all tech journalists right now, my inbox is more full than usual. It's brimming with endless pitches for chatbots to help answer people's COVID-19 questions and even triage symptoms. There's been an explosion of apps designed to track the spread of the coronavirus. There are pitches about robots and drones and autonomous vehicles that help in hospitals and deliver supplies. We're being told about all sorts of AI tools that claim to help medical providers diagnose COVID-19. Pitches for just about any company, product, or service that could conceivably be related to remote work have come our way.
There are also uplifting pitches about how this or that company is giving away its product or service for free, adapting it for a selfless purpose to help people in the wake of job losses and health scares, or marshaling resources to perform much-needed research.
It truly is encouraging, and at times downright inspiring, to see the wealth of new tools and techniques to track, prevent, treat, and in general fight COVID-19.
But finding the best parts and pieces amidst the unrelenting noise is a daily if not hourly challenge. As with all technology, hasty execution often invites privacy issues or poor security, like those Zoom has experienced even as its daily active user numbers (DAUs) have reached the stratosphere. Companies can also suddenly bump into regulatory hurdles or interoperability issues. (Fortunately, rapid cooperation between governments, researchers, and tech companies, and even between strange bedfellows such as Apple and Google, has proven possible.)
Another challenge is separating the do-gooders from the charlatans. When is a company truly being selfless, and when is it just using the pandemic to slip in some positive marketing about its widget? When is it both? It's often hard to tell (and one should generally be suspicious any time a for-profit company proclaims altruism), but it's even more difficult to discern amid the cacophony. Of course, even if there are knock-on or hidden benefits for companies that give away valuable things for free, it's hard not to be pleased with what IBM, Google, smaller companies like Element AI, and many others have done to foster research and collaboration in the fight against COVID-19.
When things change overwhelmingly quickly, it's usually a sign that we need more focus, but that may be impossible in this climate. Taking a broad example, the Gates Foundation is funding manufacturing for multiple potential vaccine trials at once because, as Bill Gates said in an interview with Trevor Noah on The Daily Show, "there's no time to evaluate which vaccine has the greatest likelihood of success." Usually, a couple of the most promising vaccines would emerge from trials and then the foundation would throw its financial support behind manufacturing the best ones. Instead, it's planning to waste a few billion dollars in the name of urgency.
This is a worldwide sprint and a marathon at the same time, and it can be tough to assimilate all the necessary information (or, in our case, to sift through the raft of news and analysis stories that come our way). But finding the signal through the noise is a skill we all have to acquire now, because combatting this global pandemic is the greatest challenge any of us has faced, and it won't wait for us to catch up.
Man creates hilarious AI version of himself to take his spot during Zoom calls – Mashable
Posted: at 7:28 pm
Video calls are a nice way to reconnect with people you haven't seen in a while or can no longer work face-to-face with, but they can really eat up a lot of time.
Matt Reed, an ambitious coder who would frankly rather be scrolling Reddit or working than calling into another boring Zoom meeting, found a way to cheat the system. Well, sort of.
Reed created an "AI Digital Twin" or a Zoombot if you will, and taught it speech recognition and text-to-speech conversation. Now, the fake version of himself is able to stand-in on Zoom calls and even respond to programed questions like "How are you?" and "Did you get that?" The only problem is it's very clear the Zoombot is a bot.
You can read all about how Matt built his AI twin here, and watch the video for a look at the hilarious Zoombot in action.
How AI is revolutionising the staffing industry – YourStory
Posted: at 7:28 pm
What once seemed like a far-away future is now a reality. The staffing industry is undergoing a revolution like never before. Almost all Fortune 500 companies are, in one way or another, using some form of automation. Many companies are trying to assess applicants in completely new ways, employing artificial intelligence (AI) to find the very best talent.
In a survey by Deloitte, 33 percent of respondents said that they were using AI to simplify the recruitment process, ultimately saving precious time and reducing the chances of human bias as well. There are several benefits of implementing AI on a larger scale in the staffing industry.
Let's discuss some of them.
There is a big pool of applicants out there that staffing firms have to sift through in order to find the right candidate. With the help of AI, these firms can collect more data on every candidate, which makes the evaluation process more efficient.
Also, AI can help better assess the skills of the candidates and match the right candidate with the job, simplifying and speeding up the process.
In the corporate world, time is one of the most precious commodities and AI has greatly helped in saving time. AI-empowered software only requires a few seconds to analyse and evaluate large chunks of data, and deliver results which can then be studied by the people making the decisions.
Human decisions always have a certain degree of bias, conscious or unconscious. Because of this, the decisions made during the recruitment process aren't always fair.
AI solves this problem by selecting only those candidates whose skills match the job requirement, helping both the candidate and the organisation.
AI has brought a revolution in the hiring process, the likes of which haven't been seen before. The trend has really caught up in recent years and it seems that it is here to stay. Companies like HIREVUE are creating video recruiting platforms where AI bots are conducting interviews.
The advantage of an AI interviewer over a human interviewer is that it is not only capable of biometric and psychometric analysis, but it also assesses the candidate's body language, vocal inflexions, and facial expressions. This deep level of analysis helps the recruiters better understand the candidate's personality, intent, and confidence, thereby aiding the recruitment process.
The traditional process of hiring includes posting job ads, interviewing applicants, and eventually selecting the most suitable candidate from the pool. The process, however, running solely on human power, is riddled with inaccuracies.
Posted randomly, the job ads may never even reach the right candidate. The right candidate might get rejected early for not having made the right resume. The interviewer might not be able to realise the full potential of the candidate during the interview. This is where AI can lend a helping hand.
Let us analyse the process step by step.
AI software can analyse large amounts of data from people's search histories and post targeted ads. These ads will receive more appropriate responses, and the most interested candidates will be motivated to apply.
Once the recruitment process begins, AI-powered chatbots come into play. Chatbots can talk to the candidates, answer their basic queries and fill in the gaps in the resumes.
There has recently been an emergence of advanced bots that can study the behaviour of the candidate using Natural Language Processing (NLP) and assess if a candidate possesses the desired skills. This way, unsuitable candidates can be detected early and filtered out, which will save a lot of time and resources.
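To make the filtering step concrete, here is a minimal sketch of such a screen in Python. It is illustrative only: real screening bots use trained NLP models rather than literal string matching, and the skill list and threshold here are invented for the example.

```python
# Hypothetical required-skill screen. Real systems use NLP models
# rather than exact string matching, but the early-filtering logic
# they implement looks broadly like this.
REQUIRED_SKILLS = {"python", "sql", "machine learning"}

def screen_candidate(resume_text: str, min_matches: int = 2):
    """Return (passes, matched_skills) for one resume."""
    text = resume_text.lower()
    matched = {skill for skill in REQUIRED_SKILLS if skill in text}
    return len(matched) >= min_matches, matched

passes, matched = screen_candidate("Built machine learning pipelines in Python and SQL.")
# passes == True, matched == {'python', 'sql', 'machine learning'}
```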
Still, there are some sceptics who are wary of the intervention of AI in the hiring process. Some fear that AI cannot yet understand candidates the way human interviewers do, and others fear that they will end up losing their jobs to a computer program.
The latter is a legitimate fear, as AI has proven itself to be far better than humans at repetitive tasks, and it's only a matter of time before it becomes better than humans at understanding humans themselves.
However, the implementation of AI doesn't have to mean the removal of humans from the staffing industry. In fact, by working in tandem with AI, we can achieve much loftier goals.
(Disclaimer: The views and opinions expressed in this article are those of the author and do not necessarily reflect the views of YourStory.)
R&D Roundup: Ultrasound/AI medical imaging, assistive exoskeletons and neural weather modeling – TechCrunch
Posted: at 7:28 pm
In the time of COVID-19, much of what transpires from the science world to the general public relates to the virus, and understandably so. But other domains, even within medical research, are still active, and as usual there are tons of interesting (and heartening) stories out there that shouldn't be lost in the furious activity of coronavirus coverage. This last week brought good news for several medical conditions as well as some innovations that could improve weather reporting and maybe save a few lives in Cambodia.
Arrhythmia is a relatively common condition in which the heart beats at an abnormal rate, causing a variety of effects, including, potentially, death. Detecting it is done using an electrocardiogram, and while the technique is sound and widely used, it has its limitations: first, it relies heavily on an expert interpreting the signal, and second, even an expert's diagnosis doesn't give a good idea of what the issue looks like in that particular heart. Knowing exactly where the flaw is makes treatment much easier.
Ultrasound is used for internal imaging in lots of ways, but two recent studies establish it as perhaps the next major step in arrhythmia treatment. Researchers at Columbia University used a form of ultrasound monitoring called Electromechanical Wave Imaging to create 3D animations of the patient's heart as it beat, which helped specialists predict 96% of arrhythmia locations compared with 71% when using the ECG. The two could be used together to provide a more accurate picture of the heart's condition before undergoing treatment.
Another approach from Stanford applies deep learning techniques to ultrasound imagery and shows that an AI agent can recognize the parts of the heart and record the efficiency with which it is moving blood with accuracy comparable to experts. As with other medical imagery AIs, this isn't about replacing a doctor but augmenting them; an automated system can help triage and prioritize effectively, suggest things the doctor might have missed or provide an impartial concurrence with their opinion. The code and data set of EchoNet are available for download and inspection.
How Microsoft Teams will use AI to filter out typing, barking, and other noise from video calls – VentureBeat
Posted: April 9, 2020 at 6:28 pm
Last month, Microsoft announced that Teams, its competitor to Slack, Facebook's Workplace, and Google's Hangouts Chat, had passed 44 million daily active users. The milestone overshadowed its unveiling of a few new features coming later this year. Most were straightforward: a hand-raising feature to indicate you have something to say, offline and low-bandwidth support to read chat messages and write responses even if you have poor or no internet connection, and an option to pop chats out into a separate window. But one feature, real-time noise suppression, stood out: Microsoft demoed how the AI minimized distracting background noise during a call.
We've all been there. How many times have you asked someone to mute themselves or to relocate from a noisy area? Real-time noise suppression will filter out someone typing on their keyboard while in a meeting, the rustling of a bag of chips (as you can see in the video above), and a vacuum cleaner running in the background. AI will remove the background noise in real time so you can hear only speech on the call. But how exactly does it work? We talked to Robert Aichner, Microsoft Teams group program manager, to find out.
The use of collaboration and video conferencing tools is exploding as the coronavirus crisis forces millions to learn and work from home. Microsoft is pushing Teams as the solution for businesses and consumers as part of its Microsoft 365 subscription suite. The company is leaning on its machine learning expertise to ensure AI features are one of its big differentiators. When it finally arrives, real-time background noise suppression will be a boon for businesses and households full of distracting noises. Additionally, how Microsoft built the feature is also instructive to other companies tapping machine learning.
Of course, noise suppression has existed in the Microsoft Teams, Skype, and Skype for Business apps for years. Other communication tools and video conferencing apps have some form of noise suppression as well. But that noise suppression covers stationary noise, such as a computer fan or air conditioner running in the background. The traditional noise suppression method is to look for speech pauses, estimate the baseline of noise, assume that the continuous background noise doesn't change over time, and filter it out.
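As a rough sketch of that traditional pipeline (not Microsoft's code; the frame and voice-activity inputs are assumed for the example), stationary noise suppression boils down to a few lines of spectral subtraction:

```python
import numpy as np

def suppress_stationary_noise(frames, speech_mask, floor=0.05):
    """Classical spectral subtraction: estimate one fixed noise
    spectrum from frames marked as speech pauses, then subtract it
    from every frame. Works only if the noise doesn't change."""
    spectra = np.abs(np.fft.rfft(frames, axis=1))   # magnitude spectrum per frame
    noise = spectra[~speech_mask].mean(axis=0)      # baseline noise from pauses
    # Subtract the baseline, keeping a small spectral floor to avoid artifacts.
    return np.maximum(spectra - noise, floor * spectra)

# frames: (n_frames, frame_len) windowed audio samples
# speech_mask: boolean array, True where speech was detected.
# A dog bark or a door slam slips through because the baseline is fixed.
```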
Going forward, Microsoft Teams will suppress non-stationary noises like a dog barking or somebody shutting a door. "That is not stationary," Aichner explained. "You cannot estimate that in speech pauses. What machine learning now allows you to do is to create this big training set, with a lot of representative noises."
In fact, Microsoft open-sourced its training set earlier this year on GitHub to advance the research community in that field. While the first version is publicly available, Microsoft is actively working on extending the data sets. A company spokesperson confirmed that as part of the real-time noise suppression feature, certain categories of noises in the data sets will not be filtered out on calls, including musical instruments, laughter, and singing.
Microsoft can't simply isolate the sound of human voices because other noises also happen at the same frequencies. On a spectrogram of a speech signal, unwanted noise appears in the gaps between speech and overlapping with the speech. It's thus next to impossible to filter out the noise: if your speech and noise overlap, you can't distinguish the two. Instead, you need to train a neural network beforehand on what noise looks like and what speech looks like.
To get his points across, Aichner compared machine learning models for noise suppression to machine learning models for speech recognition. For speech recognition, you need to record a large corpus of users talking into the microphone and then have humans label that speech data by writing down what was said. Instead of mapping microphone input to written words, in noise suppression you're trying to get from noisy speech to clean speech.
"We train a model to understand the difference between noise and speech, and then the model is trying to just keep the speech," Aichner said. "We have training data sets. We took thousands of diverse speakers and more than 100 noise types. And then what we do is we mix the clean speech without noise with the noise. So we simulate a microphone signal. And then you also give the model the clean speech as the ground truth. So you're asking the model, 'From this noisy data, please extract this clean signal, and this is how it should look like.' That's how you train neural networks [in] supervised learning, where you basically have some ground truth."
For speech recognition, the ground truth is what was said into the microphone. For real-time noise suppression, the ground truth is the speech without noise. By feeding a large enough data set (in this case, hundreds of hours of data), Microsoft can effectively train its model. "It's able to generalize and reduce the noise with my voice even though my voice wasn't part of the training data," Aichner said. "In real time, when I speak, there is noise that the model would be able to extract the clean speech [from] and just send that to the remote person."
Comparing the functionality to speech recognition makes noise suppression sound much more achievable, even though it's happening in real time. So why has it not been done before? Can Microsoft's competitors quickly recreate it? Aichner listed challenges for building real-time noise suppression, including finding representative data sets, building and shrinking the model, and leveraging machine learning expertise.
We already touched on the first challenge: representative data sets. The team spent a lot of time figuring out how to produce sound files that exemplify what happens on a typical call.
They used audiobooks to represent male and female voices, since speech characteristics do differ between male and female voices. They used YouTube data sets with labeled data that specify that a recording includes, say, typing and music. Aichner's team then combined the speech data and noise data using a synthesizer script at different signal-to-noise ratios. By amplifying the noise, they could imitate different realistic situations that can happen on a call.
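A synthesizer script of the kind described might look like the following sketch (function names and the SNR grid are mine, not Microsoft's). It scales a noise clip so the mixture hits a target signal-to-noise ratio and returns the noisy/clean pair used as model input and ground truth:

```python
import numpy as np

def mix_at_snr(clean, noise, snr_db):
    """Scale `noise` so the clean-to-noise power ratio equals the
    target SNR in dB, then mix. Returns (noisy, clean): the noisy
    signal is the model input, the clean speech is the ground truth."""
    noise = np.resize(noise, clean.shape)       # loop/trim the noise to length
    p_clean = np.mean(clean ** 2)
    p_noise = np.mean(noise ** 2) + 1e-12       # avoid division by zero
    scale = np.sqrt(p_clean / (p_noise * 10 ** (snr_db / 10)))
    return clean + scale * noise, clean

# Build training pairs across several realistic conditions:
# pairs = [mix_at_snr(speech, typing_noise, snr) for snr in (0, 5, 10, 20)]
```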
But audiobooks are drastically different than conference calls. Would that not affect the model, and thus the noise suppression?
"That is a good point," Aichner conceded. "Our team did make some recordings as well to make sure that we are not just training on synthetic data we generate ourselves, but that it also works on actual data. But it's definitely harder to get those real recordings."
Aichner's team is not allowed to look at any customer data. Additionally, Microsoft has strict privacy guidelines internally. "I can't just simply say, 'Now I record every meeting.'"
So the team couldn't use Microsoft Teams calls. Even if they could (say, if some Microsoft employees opted in to have their meetings recorded), someone would still have to mark down when exactly distracting noises occurred.
"And so that's why we right now have some smaller-scale effort of making sure that we collect some of these real recordings with a variety of devices and speakers and so on," said Aichner. "What we then do is we make that part of the test set. So we have a test set which we believe is even more representative of real meetings. And then, we see if we use a certain training set, how well does that do on the test set? So ideally yes, I would love to have a training set which is all Teams recordings and have all types of noises people are listening to. It's just that I can't easily get the same number of the same volume of data that I can by grabbing some other open source data set."
I pushed the point once more: How would an opt-in program to record Microsoft employees using Teams impact the feature?
"You could argue that it gets better," Aichner said. "If you have more representative data, it could get even better. So I think that's a good idea to potentially in the future see if we can improve even further. But I think what we are seeing so far is even with just taking public data, it works really well."
The next challenge is to figure out how to build the neural network, what the model architecture should be, and iterate. The machine learning model went through a lot of tuning. That required a lot of compute. Aichner's team was of course relying on Azure, using many GPUs. Even with all that compute, however, training a large model with a large data set could take multiple days.
"A lot of the machine learning happens in the cloud," Aichner said. "So, for speech recognition for example, you speak into the microphone, that's sent to the cloud. The cloud has huge compute, and then you run these large models to recognize your speech. For us, since it's real-time communication, I need to process every frame. Let's say it's 10 or 20 millisecond frames. I need to now process that within that time, so that I can send that immediately to you. I can't send it to the cloud, wait for some noise suppression, and send it back."
For speech recognition, leveraging the cloud may make sense. For real-time noise suppression, it's a nonstarter. Once you have the machine learning model, you then have to shrink it to fit on the client. You need to be able to run it on a typical phone or computer. A machine learning model only for people with high-end machines is useless.
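A toy loop makes the latency argument concrete (this illustrates the constraint; it is not Teams code): with 20 ms frames, the denoiser has at most 20 ms per frame, which rules out a cloud round-trip on network latency alone.

```python
import time

FRAME_MS = 20                   # per the quote: 10 or 20 ms frames
BUDGET_S = FRAME_MS / 1000.0

def stream_denoise(frames, denoise_model):
    """Run on-device inference frame by frame. If a frame takes
    longer than its own duration, the output stream falls behind
    and the audio starts to glitch."""
    for frame in frames:
        start = time.perf_counter()
        yield denoise_model(frame)          # local, shrunken model
        elapsed = time.perf_counter() - start
        if elapsed > BUDGET_S:
            raise RuntimeError(f"frame took {elapsed * 1000:.1f} ms, over the {FRAME_MS} ms budget")
```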
There's another reason why the machine learning model should live on the edge rather than the cloud. Microsoft wants to limit server use. Sometimes, there isn't even a server in the equation to begin with. For one-to-one calls in Microsoft Teams, the call setup goes through a server, but the actual audio and video signal packets are sent directly between the two participants. For group calls or scheduled meetings, there is a server in the picture, but Microsoft minimizes the load on that server. Doing a lot of server processing for each call increases costs, and every additional network hop adds latency. It's more efficient from a cost and latency perspective to do the processing on the edge.
"You want to make sure that you push as much of the compute to the endpoint of the user because there isn't really any cost involved in that. You already have your laptop or your PC or your mobile phone, so now let's do some additional processing. As long as you're not overloading the CPU, that should be fine," Aichner said.
I pointed out there is a cost, especially on devices that aren't plugged in: battery life. "Yeah, battery life, we are obviously paying attention to that too," he said. "We don't want you now to have much lower battery life just because we added some noise suppression. That's definitely another requirement we have when we are shipping. We need to make sure that we are not regressing there."
It's not just regression that the team has to consider, but progression in the future as well. Because we're talking about a machine learning model, the work never ends.
"We are trying to build something which is flexible in the future because we are not going to stop investing in noise suppression after we release the first feature," Aichner said. "We want to make it better and better. Maybe for some noise tests we are not doing as good as we should. We definitely want to have the ability to improve that. The Teams client will be able to download new models and improve the quality over time whenever we think we have something better."
The model itself will clock in at a few megabytes, but it won't affect the size of the client itself. He said, "That's also another requirement we have. When users download the app on the phone or on the desktop or laptop, you want to minimize the download size. You want to help the people get going as fast as possible."
"Adding megabytes to that download just for some model isn't going to fly," Aichner said. "After you install Microsoft Teams, later in the background it will download that model. That's what also allows us to be flexible in the future that we could do even more, have different models."
All the above requires one final component: talent.
"You also need to have the machine learning expertise to know what you want to do with that data," Aichner said. "That's why we created this machine learning team in this intelligent communications group. You need experts to know what they should do with that data. What are the right models? Deep learning has a very broad meaning. There are many different types of models you can create. We have several centers around the world in Microsoft Research, and we have a lot of audio experts there too. We are working very closely with them because they have a lot of expertise in this deep learning space."
The data is open source and can be improved upon. A lot of compute is required, but any company can simply leverage a public cloud, including the leaders: Amazon Web Services, Microsoft Azure, and Google Cloud. So if another company with a video chat tool had the right machine learners, could they pull this off?
"The answer is probably yes, similar to how several companies are getting speech recognition," Aichner said. "They have a speech recognizer where there's also lots of data involved. There's also lots of expertise needed to build a model. So the large companies are doing that."
Aichner believes Microsoft still has a heavy advantage because of its scale. "I think that the value is the data," he said. "What we want to do in the future is like what you said, have a program where Microsoft employees can give us more than enough real Teams calls so that we have an even better analysis of what our customers are really doing, what problems they are facing, and customize it more towards that."
Will product designers survive the AI revolution? – The Next Web
Posted: at 6:28 pm
"Our intelligence is what makes us human, and AI is an extension of that quality." (Yann LeCun)
The human species has performed incredible feats of ingenuity. We have created beautiful sculptures from a single block of marble, written enchanting sonnets that have stood for centuries and landed a craft on the face of a distant rock orbiting our planet. It is sobering then to think that what separates us from our close, albeit far less superior, cousins (the chimpanzee) is a 4-5% difference in our genomes.
I propose to you, however, that nature's insatiable thirst for balance has ultimately led us to create a potential rival to our dominance as a species on this planet: Artificial Intelligence. The pertinent question then becomes: what aspects of our infamous ingenuity will AI augment, and perhaps ultimately surpass?
What is AI & Machine Learning?
Essentially, what some really smart people out there are trying to achieve is a computer system that emulates human intelligence. This is the ability to make decisions that maximize the chance of the system achieving its goals. Even more important is the ability of the system to learn and evolve.
To achieve this, every system needs a starting point: massive amounts of data. For example, in order to train a computer system to tell the difference between a cat and a dog, you would have to feed it thousands of images of cats and dogs.
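As a concrete (and heavily simplified) sketch of that training setup, the following PyTorch snippet fine-tunes a pretrained network on a hypothetical data/train/cat and data/train/dog folder layout; everything here is illustrative rather than a prescribed recipe.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Hypothetical layout: data/train/cat/*.jpg and data/train/dog/*.jpg
tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
data = datasets.ImageFolder("data/train", tfm)     # labels come from folder names
loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

model = models.resnet18(weights="IMAGENET1K_V1")   # pretrained features (torchvision >= 0.13)
model.fc = nn.Linear(model.fc.in_features, 2)      # two outputs: cat, dog
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for images, labels in loader:                      # one epoch, for brevity
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```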
What is creativity?
"Creativity is seeing what everyone else saw, and thinking what no one else thought." (Albert Einstein)
I've heard many people say a computer system could never be creative, and that to create art, music, or an ad campaign, one needs to feel, have a soul, and a lifetime of experiences to draw from.
Having spent over a decade in the advertising industry, I can confidently say that the best creatives I have seen were usually the ones with the most exposure. The more you have seen, traveled or experienced, the more creative you tend to be.
Creativity is about challenging the norm, thinking differently, being the square pegs in the round holes, and evoking specific emotions in your audience. So how difficult can that be for AI to achieve? It certainly seems that in today's world, creativity is actually very arbitrary. Why? Because two wildly different images, contrasted side by side in the original article, are both considered valuable works of art.
AI In The Enterprise: Reality Or Myth? – Forbes
Posted: at 6:28 pm
Artificial intelligence (AI) is one of the most talked-about new technologies in the business world today.
It's estimated that enterprise AI usage has increased 270% since 2015. This has coincided with a massive spike in investment, with the enterprise AI industry expected to grow to $6.1 billion by 2022.
Along with the technology's very real ability to transform the job market, exaggerated claims have also become common. The hype surrounding this branch of technology has led to a number of myths:
Myth No. 1: More Data Is The Key To AI's Success
While it's true that AI needs data in order to learn and operate efficiently, the idea that more data equals better outcomes is misleading. Not all data is created equal.
If the information fed to an AI program is labeled incorrectly or isn't relevant, it poisons the data pool. The more information AI has access to, the more precise its models and predictions will be. If the data itself is of poor quality, the outcome will be precise but not necessarily based on business reality. This can result in poor decision-making.
The truth is that the data fed to an AI solution needs to be curated and analyzed beforehand. Prioritize quality over quantity.
Myth No. 2: Companies See Immediate Value From AI Investments
The integration of AI into standard operating procedures doesn't happen overnight. As seen in Myth No. 1, the data the AI uses needs to be curated and checked for relevance beforehand. This may significantly reduce the amount of information the AI has access to.
To obtain truly valuable returns, it's essential to continuously provide relevant data. Like humans, AI solutions need to be given time to learn. There may be a significant lag between when an AI-based initiative begins and when you see a return on investment.
Myth No. 3: AI Will Render Humans Obsolete
The purpose of AI is not to replace all human workers. AI is a tool businesses can use to achieve their goals. It can automate mundane processes and pull interesting insights from large data sets. When used correctly, it augments and aids human decision-making. AI provides recommendations based on trends gleaned from mountains of information. It may even pose new questions that have never been considered. A human still needs to weigh the information provided and make a final decision based on risk analysis.
Pointing out these myths in no way indicates that AI won't deliver on its transformational promise. It's easy to forget that enterprise AI adoption is still in its infancy. Even still, a 2018 Deloitte survey reported that 82% of executives said their AI projects had already led to a positive ROI. Those now implementing AI projects will be the case studies of the near future.
While there are sure to be growing pains, being on the cutting edge of this exciting technology should be beneficial. There's little doubt about how important it will be for the businesses of tomorrow. Getting a head start now, ironing out the wrinkles and locking down efficient processes will pay dividends.
AI streamlines acoustic ID of beluga whales – GCN.com
Posted: at 6:27 pm
Scientists at the National Oceanic and Atmospheric Administration who study endangered beluga whales in Alaska's Cook Inlet used artificial intelligence to reduce the time they spend on analysis by 93%.
Researchers have acoustically monitored beluga whales in the waterway since 2008, but acoustic data analysis is labor-intensive because automated detection tools are "relatively archaic in our field," Manuel Castellote, a NOAA affiliate scientist, told GCN. "By improving the analysis process, we would provide results sooner, and our research would become more efficient."
The analysis typically gets hung up in the process of validating the data because detectors pick up any acoustic signal that is similar to that of a beluga whale's call or whistle. As a result, researchers get many false detections, including noise from vessel propellers, ice friction and even birds at the surface in shallow areas, Castellote said.
A machine learning model that could distinguish between actual whale calls and other sounds would provide highly accurate validation output and replace the effort of a human analyst going through thousands of detections to validate the ones corresponding to beluga, he said.
The researchers used Microsoft AI products to develop a model with a deep neural network, a convolutional neural network, a deep residual network, and a densely connected convolutional neural network. The resulting detector, an ensemble of these four AI models, is more accurate than each of the independent models, Castellote said.
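The article does not publish the model code, but the ensembling idea can be sketched in a few lines (the predict() interface below is an assumption for illustration, not NOAA's or Microsoft's API):

```python
import numpy as np

def ensemble_validate(clips, models, threshold=0.5):
    """Average per-clip 'beluga' probabilities from several
    independently trained networks. Because the models make
    partly uncorrelated errors, the average tends to beat any
    single member -- the effect described in the article."""
    probs = np.mean([m.predict(clips) for m in models], axis=0)
    return probs >= threshold   # True = real call, False = false detection

# models might be [dnn, cnn, resnet, densenet]: four trained
# classifiers, each exposing predict(clips) -> P(beluga) per clip.
```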
Here's how it works: Twice a year, researchers recover acoustic recorders from the seafloor. A semi-automated detector has been extracting the data and processing it, looking for tones in the recordings. It yields thousands, sometimes hundreds of thousands, of detections per dataset.
The team used the collection of recordings with annotated detections -- both actual beluga calls and false positives -- that it has amassed in the past 12 years to train the AI and ML tools.
Now, instead of having a data analyst sit in front of a computer for seven to 14 days to validate all these detections one by one, the unvalidated detection log is used by the ensemble model to check the recordings and validate all the detections in the log in four to five hours, Castellote said. The validated log is then used to generate plots of beluga seasonal presence in each monitored location. These results are useful to inform management decisions.
With the significant time they're saving, researchers can increase the number of recorders they send to the seafloor each season and focus on other aspects of data analysis, such as understanding where belugas feed based on the sounds they make when hunting prey, Castellote said. They can also study human-made noise to identify activity in the area that might harm the whales.
The team is now moving into the second phase of its collaboration with Microsoft, which involves cutting the semi-automated detector out of the process and instead applying ML directly to the sound recordings. The streamlined process will search for signals from raw data, rather than using a detection log to validate pre-detected signals.
"This allows widening the detection process from beluga only to all cetaceans inhabiting Cook Inlet," Castellote said. "Furthermore, it allows incorporating other target signals to be detected and classified [such as] human-made noise. Once the detection and classification processes are implemented, this approach will allow covering multiple objectives at once in our data analysis."
Castellote's colleague, Erin Moreland, will use AI this spring to monitor other mammals, too, including ice seals and polar bears. A NOAA turboprop airplane outfitted with AI-enabled cameras will fly over the Beaufort Sea, scanning and classifying the imagery to produce a population count that will be ready in hours instead of months, according to a Microsoft blog post.
The work is in line with a larger NOAA push for more AI in research. On Feb. 18, the agency finalized the NOAA Artificial Intelligence Strategy. It lists five goals for using AI, including establishing organizational structures and processes to advance AI agencywide, using AI research in support of NOAA's mission and accelerating the transition of AI research to applications.
Castellote said the ensemble deep learning model he's using could easily be applied to other acoustic signal research.
"A code module was built to allow retraining the ensemble," he said. "Thus, any other project focused on different species (and soon human-made noise) can adapt the machine learning model to detect and classify signals of interest in their data."
Specifics about the model are available on GitHub.
How AI will earn your trust – JAXenter
Posted: at 6:27 pm
In the world of applying AI to IT Operations, one of the major enterprise concerns is a lack of trust in the technology. This tends to be an emotional rather than an intellectual response. When I evaluate the sources of distrust in relation to IT Ops, I can narrow it down to four specific causes.
The algorithms used in AIOps are fairly complex, even if you are addressing an audience which has a background in computer science. The way in which these algorithms are constructed and deployed is not covered in academia. Modern AI is mathematically intensive, and many IT practitioners haven't even seen this kind of mathematics before. The algorithms are outside the knowledge base of today's professional developers and IT operators.
When you analyse the specific types of mathematics used in popular AI-based algorithms, deployed in an IT operations context, the maths is basically intractable. What is going on inside the algorithms cannot be teased out or reverse engineered. The mathematics generates patterns whose sources cannot be determined due to the very nature of the algorithm itself.
For example, an algorithm might tell you a number of CPUs have passed a usage threshold of 90%, which will result in end user response time degrading. Consequently, the implicit instruction is to offload the usage of some servers. When you have this situation, executive decision makers will want to know why the algorithm indicates there is an issue. If you were using an expert system, it could go back and show you all the invoked rules until you reverted back to the original premise. It's almost like doing a logical inference in reverse. The fact that you can trace it backwards lends credibility and validates the conclusion.
What happens in the case of AI is that things get mixed up and switched around, which means links are broken from the conclusion back to the original premise. Even if you have enormous computer power, it doesn't help, as the algorithm loses track of its previous steps. You're left with a general description of the algorithm, the start and end data, but no way to link all these things together. You can't run it in reverse. It's intractable. This generates further distrust, which lives on a deeper level. It's not just about being unfamiliar with the mathematical logic.
Let's look at the way AI has been marketed since its inception in the late 1950s. The general marketing theme has been that AI is trying to create a human mind; when this is translated into a professional context, people view it as a threat to their jobs. This notion has been resented for a long time. Scepticism is rife, but it is often a tactic used to preserve livelihoods.
The way AI has been marketed, as an intellectual goal and a meaningful business endeavour, lends credibility to that concern. This is when scepticism starts to shade into genuine distrust. Not only is this a technology that may not work; it is also my personal enemy.
IT Operations, in terms of all the various enterprise disciplines, is always being threatened with cost cutting and role reduction. Therefore, this isn't just paranoia; there's a lot of justification behind the fear.
IT Operations has had a number of bouts with commercialized AI, which first emerged in the final days of the cold war, when a lot of code was repackaged and sold to IT Ops as a plausible use case. Many of the people who are now in senior enterprise positions were among the first wave of people who were excited about AI and what it could achieve. Unfortunately, AI didn't initially deliver on expectations. So for these people, AI is not something new; it's a false promise. This is a historical reason for scepticism which is unique to the IT Ops world.
These are my four reasons why enterprises don't trust AIOps and AI in general. Despite these concerns, the use of AI-based algorithms in an IT Operations context is inevitable.
Take your mind back to a very influential Gartner definition of big data in 2001. Gartner came up with the idea of the 3Vs. The 3Vs (volume, variety and velocity) are three defining properties or dimensions of big data. Volume refers to the amount of data, variety refers to the number of types of data and velocity refers to the speed of data processing. At the time the definition was very valuable and made a lot of sense.
The one thing Gartner missed is the issue of dimensionality, i.e., how many attributes a dataset has. Traditional data has maybe four or five attributes. If you have millions of these datasets, with a few attributes, you can store them in a database and it is fairly straightforward to search on key values and conduct analytics to obtain answers from the data.
However, when you're dealing with high dimensions and a data item that has a thousand or a million attributes, suddenly your traditional statistical techniques don't work. Your traditional search methods become ungainly. It becomes impossible to formulate a query.
As our systems become more volatile and dynamic, we are unintentionally multiplying data items and attributes, which leads me on to AI. Almost all of the AI techniques developed to date are attempts to handle high dimensional data structures and collapse them into a smaller number of manageable attributes.
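One classical instance of that collapse is principal component analysis. Here is a minimal NumPy sketch (with illustrative dimensions) that projects items with thousands of attributes onto their few strongest directions of variation:

```python
import numpy as np

def pca_reduce(X, k):
    """Project rows of X onto their top-k principal components,
    collapsing many attributes into k manageable ones."""
    Xc = X - X.mean(axis=0)                  # center each attribute
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                     # coordinates in the top-k basis

X = np.random.rand(1000, 10_000)             # 1,000 items, 10,000 attributes
X_small = pca_reduce(X, 20)                  # now 1,000 items x 20 attributes
```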
When you go to the leading universities, you're seeing fewer courses on Machine Learning, but more geared towards embedding Machine Learning topics into courses on high dimensional probability and statistics. What's happening is that Machine Learning per se is starting to resemble practically oriented bootcamps, while the study of AI is now more focussed on understanding probability, geometry and statistics in relation to high dimensions.
How did we end up here? The brain uses algorithms to process high dimensional data and reduces it to low dimensional attributes; it then processes these and ends up with a conclusion. This is the path AI has taken. Let's codify what the brain is doing, and you end up realizing that what you're actually doing is high dimensional probability and statistics.
I can see discussions about AI being repositioned around high dimensional data, which will provide a much clearer vision of what is trying to be achieved. In terms of IT operations, there will soon be an acknowledgement that modern IT systems contain high volume, high velocity and high variety data, but now also high dimensional datasets. In order to cope with this, we're going to need high dimensional probability and statistics and to model it in high dimensional geometry. This is why AIOps is inevitable.
Surge in Remote Working Leads iManage to Launch Virtual AI University for Companies that Want to Harness the Power of the RAVN AI Engine -…
Posted: at 6:27 pm
CHICAGO, April 09, 2020 (GLOBE NEWSWIRE) -- iManage, the company dedicated to transforming how professionals work, today announced that it has rolled out a virtual Artificial Intelligence University (AIU), as an adjunct to its customer on-site model. With the virtual offering, legal and financial services professionals can actively participate in project-driven, best-practice remote AI workshops that use their own, real-world data to address specific business issues even amidst the disruption caused by the COVID-19 outbreak.
AIU helps clients to quickly and efficiently learn to apply machine learning and rules-based modeling to classify, find, extract and analyze data within contracts and other legal documents for further action, often automating time-consuming manual processes. In addition to delivering increases in speed and accuracy of data search results, AI frees practitioners to focus on other high-value work. Driven both by the need of organizations to reduce operational costs and to adapt to fundamental shifts toward remote work practices, virtual AIU is playing an important role in helping iManage clients continue to work and collaborate productively. The curriculum empowers end users with all the skills they need to quickly ramp up the efficiency and breadth of their AI projects using the iManage RAVN AI engine.
"Participating in AIU was a huge win for us. We immediately saw the impact AI would have in surfacing information we need and allowing us to action it to save time, money and frustration," said Nikki Shaver, Managing Director, Innovation and Knowledge, Paul Hastings. "The workshop gave us deep insight into how to train the algorithm effectively for the best possible effect. And, very quickly, more opportunities came to light as to how AI could augment our business in the longer term," continued Shaver.
"AI is a transformation technology that's continuing to gain momentum in the legal, financial and professional services sectors. But many firms don't yet have the internal knowledge or training to deliver on its promise. iManage is committed to helping firms establish AI Centers of Excellence, not just sell them a kit and walk away," said Nick Thomson, General Manager, iManage RAVN. "We've found the best way to ensure client success is to educate and build up experience inside the firm about how AI works and how to apply it to a broad spectrum of business problems."
Deep Training Delivers Powerful Results
iManage AIU's targeted, hands-on training starts with the fundamentals but delves much deeper, enabling organizations to put the flexibility and speed of the technology to work across myriad scenarios. RAVN easily helps facilitate actions like due diligence, compliance reviews or contract repapering, as well as more sophisticated modeling that taps customized rule development to address more unique use cases.
The advanced combination of machine learning and rules-based extraction capabilities in RAVN makes it the most trainable platform on the market. Users can teach the software what to look for, where to find it and then how to analyze it using the RAVN AI engine.
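As a generic illustration of what a rules-based extraction looks like (this is not the RAVN API; the pattern and clause are invented for the example), a single rule pairs a textual cue for where to look with a pattern for what to capture:

```python
import re

# One hand-written "rule": find the governing-law clause and
# capture the jurisdiction. Real engines combine many such rules
# with trained models; this shows only the rules-based half.
GOVERNING_LAW = re.compile(r"governed by the laws of\s+([A-Za-z ]+)")

def extract_governing_law(contract_text):
    match = GOVERNING_LAW.search(contract_text)
    return match.group(1).strip() if match else None

extract_governing_law("This Agreement shall be governed by the laws of England and Wales.")
# -> 'England and Wales'
```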
Armed with the tools and training to put AI to work across their data stores and documents, AIU graduates can help their organizations unlock critical knowledge and insights in a repeatable way across the enterprise.
Interactive Curriculum Builds Strong Skillsets
The personalized, interactive course is delivered over three half-day sessions, via video conferencing, to a small team of customer stakeholders. Such teams may include data scientists, knowledge managers, lawyers, partners, contract specialists, and trained legal staff. AIU is also available to firms that are considering integrating the RAVN engine and would like to see AI in action as they assess the potential impact of the solution on their businesses.
Expert iManage AI instructors, with deep technology and legal expertise, work with clients in advance to help identify use cases for the virtual AIU. The iManage team fully explores client use cases prior to the training to facilitate the most effective approach to extraction techniques for client projects.
The daily curriculum includes demonstrations with user data and individual and group exercises to evaluate and deepen user skills. Virtual breakout rooms for project drill down and feedback mechanisms, such as polls and surveys, help solidify learning and make the sessions more interactive. Recordings and transcripts allow customers to revisit AIU sessions at any time.
For more information on iManage virtual AIU or on-site training read our AI blog post or contact us at AIU@imanage.com.
About iManage
iManage transforms how professionals in legal, accounting and financial services get work done by combining artificial intelligence, security and risk mitigation with market leading document and email management. iManage automates routine cognitive tasks, provides powerful insights and streamlines how professionals work, while maintaining the highest level of security and governance over critical client and corporate data. Over one million professionals at over 3,500 organizations in over 65 countries, including more than 2,500 law firms and 1,200 corporate legal departments and professional services firms, rely on iManage to deliver great client work securely.
Press Contact: Anastasia Bullinger, iManage, +1.312.868.8411, press@imanage.com