The Prometheus League
Breaking News and Updates
- Abolition Of Work
- Ai
- Alt-right
- Alternative Medicine
- Antifa
- Artificial General Intelligence
- Artificial Intelligence
- Artificial Super Intelligence
- Ascension
- Astronomy
- Atheism
- Atheist
- Atlas Shrugged
- Automation
- Ayn Rand
- Bahamas
- Bankruptcy
- Basic Income Guarantee
- Big Tech
- Bitcoin
- Black Lives Matter
- Blackjack
- Boca Chica Texas
- Brexit
- Caribbean
- Casino
- Casino Affiliate
- Cbd Oil
- Censorship
- Cf
- Chess Engines
- Childfree
- Cloning
- Cloud Computing
- Conscious Evolution
- Corona Virus
- Cosmic Heaven
- Covid-19
- Cryonics
- Cryptocurrency
- Cyberpunk
- Darwinism
- Democrat
- Designer Babies
- DNA
- Donald Trump
- Eczema
- Elon Musk
- Entheogens
- Ethical Egoism
- Eugenic Concepts
- Eugenics
- Euthanasia
- Evolution
- Extropian
- Extropianism
- Extropy
- Fake News
- Federalism
- Federalist
- Fifth Amendment
- Financial Independence
- First Amendment
- Fiscal Freedom
- Food Supplements
- Fourth Amendment
- Free Speech
- Freedom
- Freedom of Speech
- Futurism
- Futurist
- Gambling
- Gene Medicine
- Genetic Engineering
- Genome
- Germ Warfare
- Golden Rule
- Government Oppression
- Hedonism
- High Seas
- History
- Hubble Telescope
- Human Genetic Engineering
- Human Genetics
- Human Immortality
- Human Longevity
- Illuminati
- Immortality
- Immortality Medicine
- Intentional Communities
- Jacinda Ardern
- Jitsi
- Jordan Peterson
- Las Vegas
- Liberal
- Libertarian
- Libertarianism
- Liberty
- Life Extension
- Macau
- Marie Byrd Land
- Mars
- Mars Colonization
- Mars Colony
- Memetics
- Micronations
- Mind Uploading
- Minerva Reefs
- Modern Satanism
- Moon Colonization
- Nanotech
- National Vanguard
- NATO
- Neo-eugenics
- Neurohacking
- Neurotechnology
- New Utopia
- New Zealand
- Nihilism
- Nootropics
- NSA
- Oceania
- Offshore
- Olympics
- Online Casino
- Online Gambling
- Pantheism
- Personal Empowerment
- Poker
- Political Correctness
- Politically Incorrect
- Polygamy
- Populism
- Post Human
- Post Humanism
- Posthuman
- Posthumanism
- Private Islands
- Progress
- Proud Boys
- Psoriasis
- Psychedelics
- Putin
- Quantum Computing
- Quantum Physics
- Rationalism
- Republican
- Resource Based Economy
- Robotics
- Rockall
- Ron Paul
- Roulette
- Russia
- Sealand
- Seasteading
- Second Amendment
- Seychelles
- Singularitarianism
- Singularity
- Socio-economic Collapse
- Space Exploration
- Space Station
- Space Travel
- Spacex
- Sports Betting
- Sportsbook
- Superintelligence
- Survivalism
- Talmud
- Technology
- Teilhard De Chardin
- Terraforming Mars
- The Singularity
- Tms
- Tor Browser
- Trance
- Transhuman
- Transhuman News
- Transhumanism
- Transhumanist
- Transtopian
- Transtopianism
- Ukraine
- Uncategorized
- Vaping
- Victimless Crimes
- Virtual Reality
- Wage Slavery
- War On Drugs
- Waveland
- Ww3
- Yahoo
- Zeitgeist Movement
- Prometheism
- Forbidden Fruit
- The Evolutionary Perspective
Category Archives: Ai
Otter.ai slashes free monthly transcription minutes to 300, but opens recorder bot to all – TechCrunch
Posted: August 20, 2022 at 2:19 pm
There's good news and bad news for users of the Otter.ai transcription service. The good news is that Otter Assistant, a bot that can be configured to record meetings automatically, will now be available to everyone, regardless of whether they're a free or paid user.
The bad news, however, is that Otter.ai is scaling back on some features, like the number of monthly transcription minutes available for basic and pro accounts.
Otter.ai first launched its bot to automatically record Zoom meetings last May, though it later added support for Google Meet, Microsoft Teams and Cisco Webex. The assistant integrates with the user's calendar, automatically joins any scheduled meeting, records it and shares the transcription with everyone in the meeting. So even if someone can't attend a meeting, they can at least listen back to it and peruse the notes later.
The feature was originally only available to subscribers on the business plan, but starting September 27 it will be available to Free and Pro accounts too. However, those who pay for a Pro account will be able to ask the Otter Assistant to join two concurrent meetings.
What's more, the company's AI-generated meeting summary feature, which was introduced in March, will be available to both Basic and Pro account users too.
While users are gaining these features, the company is restricting things like transcription minutes per month for both Basic and Pro accounts. Here's a rundown of what's changing:
- Otter Basic (free tier): monthly transcription minutes cut to 300
- Otter Pro: monthly limits also reduced (full details in the image below)
But that's not all. Otter Pro's monthly subscribers will have to pay $16.99 per month instead of $12.99 starting September 27, though they will get to use their accounts with the current limits until November 30. The annual plan will still cost $99.99 ($8.33 per month), so if users subscribe to that plan before September 27, current feature limits will apply until next year.
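The annual-versus-monthly math is easy to verify; here is a quick sketch using only the prices quoted above:

```python
# Otter Pro pricing from the article: $16.99/month month-to-month,
# or $99.99/year on the annual plan.
monthly_rate = 16.99
annual_plan = 99.99

print(f"Annual plan per month: ${annual_plan / 12:.2f}")           # ~$8.33
print(f"Twelve months at monthly rate: ${monthly_rate * 12:.2f}")  # $203.88
print(f"Annual-plan savings: ${monthly_rate * 12 - annual_plan:.2f}")
```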
Clearly, the company, which raised $50 million in a Series B round last year, is coercing users to commit to the yearly plan.
New features offered and limitations of Otter Basic and Pro accounts. Image Credits: Otter
While more business-centric alternatives such as Dialpad have enjoyed massive success, with this latest move it seems that Otter.ai is trying to appease the more casual user while also trying to boost its revenues by encouraging users to upgrade their plans to get the same features that they're accustomed to.
Other alternatives such as TLDV, meanwhile, offer unlimited recording and transcription for free users, a fact that could help lure current Otter.ai stalwarts over to its platform.
How Microsoft’s AI convinced me to switch to Edge, and where the browser still falls short – GeekWire
Posted: at 2:19 pm
Microsoft's Edge browser comes with a built-in Read aloud feature. (GeekWire Illustration)
I finally broke down and switched to Microsoft's Edge browser this week on my Windows PC, after many years of using Google Chrome.
No, it wasn't the incessant and annoying prompts in Windows 11 urging me to make Edge my default, although the nagging did keep the Microsoft browser top of mind.
For me, the tipping point was Edge's built-in Read aloud feature, and what sounds to my ears like major advances in some of Microsoft's synthesized voices, to the point that they're almost indistinguishable from human narrators.
I've long been a fan of text-to-speech for listening to articles and long emails.
I've used various apps and browser plugins over the years, some of them more seamless than others.
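For anyone who wants to tinker with the same idea outside the browser, here is a minimal sketch using the open-source pyttsx3 library. It is only an illustration of basic text-to-speech on a desktop, not Edge's Read aloud or Microsoft's neural voices:

```python
# Minimal offline text-to-speech with pyttsx3 -- a stand-in for the idea,
# not the neural voices discussed in this article.
import pyttsx3

engine = pyttsx3.init()
engine.setProperty("rate", 175)  # speaking speed in words per minute
engine.say("Text-to-speech makes long articles and emails listenable.")
engine.runAndWait()
```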
Microsoft Edge's Read aloud feature is controllable directly from a web page, after activating it from a menu accessible under the three dots in the upper right of the browser frame, or by right-clicking on the text.
As it reads, you can click on the actual text on the page to go to a particular section.
As with most automated text-to-speech technologies, sometimes you do have to put up with some minor annoyances, such as the voice reading fine print, menu items or disclaimers on a site. The ability to select the text to be read, or jump around by clicking on the text, helps to overcome that when listening via the browser.
Significant improvement in voice quality: But the grabber for me is the increasing authenticity of some of the Microsoft voices: the inflections, the pauses, the lack of the tell-tale robotic vocal fry. For example, here is Microsoft Michelle Online (Natural) reading this paragraph.
It's not perfect. The AI can still sound briefly robotic. Unusual names can also cause problems. Reading this story today about Geocaching by my colleague Kurt Schlosser, for example, Michelle pronounces it "Geo-coshing."
Still, the quality is much better than the drone voices that had my friends and colleagues making fun of my attempts to use text-to-speech tools in the past.
Microsoft Edge's features for importing data and passwords, standard in browsers these days, made the switch relatively easy. Edge's use of Chromium, the underlying open-source engine that powers Chrome, also helped ease the transition. Edge debuted in 2015, and the company officially retired Internet Explorer this year.
Mobile syncing benefits and bugs: The feature is also available in the Edge browser for smartphones, and it works well there. You can access read aloud by clicking on the three dots at the bottom of the Edge mobile app.
But this also shows where Microsoft is falling short. Edge's Collections feature for saving web pages is supposed to sync across PCs and mobile devices when logged in via a Microsoft account. I've set up a "Read Later" collection where, theoretically, I can save articles in my PC browser for the AI to read aloud later in the Edge app on my Android phone.
The articles do save in my PC browser, but my Edge Collections won't sync to my phone. I've checked all the settings and gone through all the troubleshooting steps, without any luck. All of my other data is syncing. This appears to be a problem for many others, as well.
I'll keep trying to find a fix, and I'll update this story if I do. Even if it is a case of user error, it shouldn't be this hard.
Amazon Alexa and audiobooks: This is probably a subject for another post, but I'm also a fan of Amazon Alexa's feature for reading Kindle books on Echo devices. The implementation, in my experience, is less than ideal, frequently forgetting where you were when you stopped having Alexa read to you.
It's going to be fascinating to see the impact that the growing authenticity of synthetic voices has on Amazon's Audible audiobook subsidiary in the years ahead.
In the meantime, if anyone out there has any feedback, ideas, or different approaches for making the most of text-to-speech technology in your daily work, please let me know via Twitter, LinkedIn or my email address below.
Empowering the AI Enterprise in the 5G Era – RTInsights
Posted: at 2:19 pm
The proliferation of 5G wireless networks will encourage communities of AI application developers to create new solutions that take advantage of 5G speed and low latency.
Cellular communications have reached the next stage of evolution, supporting real-time, machine-to-machine connectivity as well as telephone services. 5G, or fifth-generation wireless technology, has increased the speed and responsiveness of wireless networks so data can be transmitted at multigigabit speeds, with theoretical speeds up to 20 Gbps. In fact, 5G wireless networks exceed the speed of most wired networks. The increased performance of 5G wireless connectivity also enables complex computing applications like artificial intelligence (AI), creating new opportunities for wireless applications. Adding no-code AI development brings an even greater potential to 5G networking.
The availability of 5G high-speed wireless technology promises the expansion of technologies such as the Internet of Things (IoT), which connects devices. Using 5G wireless systems and IoT sensors, you can remotely monitor traffic conditions, monitor patients, monitor conditions to activate farm irrigation systems, and manage autonomous things like smart cars or drones. You can improve the performance of 5G networks even further with edge computing, where you place the computing resources needed to process application data closer to the location where the data is used to minimize response time.
5G wireless infrastructures can easily support AI processing. Where 5G simplifies and speeds up the integration of multiple technologies, AI gives those systems intelligence and the ability to learn from the available data. AI simulates human intelligence by applying three cognitive skills: learning, which includes acquiring data and applying rules; reasoning, choosing the right data to create the desired result; and self-correction, fine-tuning data sorting for more accurate results.
The marriage of 5G and AI is already shaping the future of wireless ecosystems, and with no-code AI development platforms, applications that take advantage of AI and machine learning (ML) will increase. Putting AI development in the hands of IT professionals and domain experts will eliminate the need for data scientists to automate next-generation wireless applications.
See also: Manufacturing Ahead of the Curve via AI, 5G, and Edge
AI is already an integral part of cellular networking. Mobile communications services use AI to optimize network performance and reduce capital expenditures. 5G service providers use AI to improve customer service and provide personalized support. Service providers also use AI for 5G network planning, network performance management, lifecycle management, and revenue management.
AI is also having a direct impact on 5G network performance. While phones have become smarter, the core smartphone algorithms haven't changed since the 1990s. As a result, 5G systems use more power and achieve lower data rates than expected. Using AI-enabled algorithms with deep learning capabilities reduces power consumption and improves performance. AI is also being used to resolve issues related to available RF frequencies for cellular signals to alleviate bandwidth overcrowding.
Low latency also makes 5G networks ideal for applications that benefit from faster response times, such as real-time video, which depends on AI to function. Using AI/ML for 5G systems also powers proactive network responses, creating dynamic cellular clusters that use learned data to improve latency, efficiency, and reliability.
While AI/ML is being used to power a new generation of 5G applications as well as the 5G infrastructure itself, no-code developer tools will make these AI/ML applications available to everyone.
There is an ongoing shortage of AI developers. To bridge the skills gap, no-code AI development tools are putting the power of artificial intelligence in the hands of IT managers and subject-matter experts. No-code platforms abstract AI complexity to make building, experimenting with, and deploying AI/ML software easier. Rather than requiring data scientists with deep AI expertise, no-code AI platforms provide a drag-and-drop interface so any enterprise manager can create their own AI applications.
The proliferation of 5G wireless networks will encourage communities of AI application developers to create new solutions that take advantage of 5G speed and low latency. No-code AI development platforms will promote innovation. A growing user community and an open architecture platform will make 5G more accessible and enable new, best-in-class models and tools that can be incorporated into AI/ML workflows.
Organizations are building private 5G networks as well. Organizations with a large number of IoT sensors or endpoints benefit from private 5G because a dedicated network delivers ultra-low latency and extremely high bandwidth. 5G was initially designed to accommodate massive sensor grids for IoT applications. IoT devices and sensors generate large volumes of data, so a conventional core network or centralized cloud infrastructure can't handle the traffic. Using edge computing for low latency is ideal for AI applications that need real-time efficiency.
While 5G technology is proving ideal for real-time applications, there are still obstacles that are slowing 5G adoption. There are different flavors of 5G, each with different performance characteristics. Carriers are still building towers, power stations, data centers, sensors, and infrastructure to make 5G available to everyone in the United States, but it could take years to complete a 5G infrastructure.
In the meantime, entrepreneurs are stepping in to solve the complex problems posed by 5G networking. Innovators backed by investors and government policies are creating easy-to-use AI solutions that take advantage of the existing 5G infrastructure. The growth of no-code AI/ML development tools will lower the barriers to entry for 5G innovators and empower a new class of AI users.
One day consoles will have a ‘giant AI chip and all the games will be dreams’ – PC Gamer
Posted: at 2:19 pm
The founder and CEO of Midjourney, David Holz, has some truly inspiring views around how AI image generation will transform the gaming industry. During the short time we spoke this week, I had to hold myself back from falling too deep into the AI rabbit hole. In the process, I discovered Holz's view on how this kind of tech will develop and how it's likely to benefit the gaming industry, as well as human creativity as a whole.
Holz believes that one day in the near future, "you'll be able to buy a console with a giant AI chip and all the games will be dreams."
It's a beautiful sentiment for sure, but it's the physics of current technology that's holding us back from exploring the full potential of AI in games. Right now these kinds of AI generators use excruciating amounts of graphical power, and it's just not practical for the kind of utopian visions Holz and I have dreamed of.
He tells me that Midjourney produces images using algorithms that "all run on the cloud, and they're running on very big GPUs, like $40,000 GPU servers ... I think it's fair to say that it's the most compute-heavy consumer application that's ever existed."
That's a lot of energy and a great deal of money to sink into anything, but Holz truly believes in the benefits of the technology Midjourney is pioneering.
He tells me it's already being used as a way of self-soothing after a traumatic event. "Some of them are actually using the AI in a purely therapeutic process. And it's hard to understand that, but you'll see weird images and you'll ask them 'why are you doing Maltese dogs in heaven?' And they'll say 'it's because my dogs just died.' And you're like 'oh my god, are you okay?'"
Of course, there's always that looming fear around AI replacing humans, but Holz has a much more positive outlook.
"We're not trying to build God, we're trying to amplify the imaginative powers of the human species," he says.
He makes it clear it's not about designing tech to replace people, it's about the "proliferation of the visual means to express yourself. It just means that people will become more visual in our culture, and more appreciative of those kinds of things. And there'll be more opportunity around that than there was before."
The barriers between consuming something and creating something fall away
I'm much in the same mind, and having come from a game art and design background I can certainly see its potential in idea generation for concept artists.
"Before you see video games being generated on the fly, you're gonna see the technologies being used for every step of the asset generation pipeline, to increase the creativity of the content, the quality of the content, and the amount of the content," says Holz. "...you're gonna have game studios using AI to help bake out lots of assets, textures, terrain, layouts and characters. Even if it takes ten minutes to make a high quality character, that's still much faster than it would take during the normal production process.
"One would hope that in ten years time there's no longer static content because everything is generated on the fly. So in theory, the barriers between consuming something and creating something fall away, and it becomes like liquid imagination flowing around the room.
"Everything between now and then is a combination of increasing the quality, being able to do things like 3D, making things faster, making things higher resolution, and having smaller and smaller chips doing more and more stuff."
So it seems we just have to wait for the technology to catch up. And it is catching up, fast. There are, albeit less powerful, AI image generators that run on consumer hardware, and it's only a matter of time before these algorithms are even more efficient and involved, so we can get down to generating entire triple-A games as we play.
‘Amped-up citizen science’ to save the world: Q&A with Conservation AI Hub’s Grant Hamilton – Mongabay.com
Posted: at 2:19 pm
Conservation apps have emerged in recent years as an efficient and cost-effective way to get citizens to monitor and document wildlife across the world. But an Australian initiative is going one step further.
In a bid to detect and save the country's dwindling koala (Phascolarctos cinereus) population, Conservation AI Hub has, since the beginning of this year, been training volunteers in the state of Queensland to use infrared drones. The goal: to find koalas that are usually found curled high up in the trees. The initiative by the Queensland University of Technology (QUT) began earlier, in August 2021, with a small team operating the drones. They then analyzed the images collected using artificial intelligence algorithms. But as the project scaled up, the need to rope in volunteers became apparent.
It's an urgent mission. The deadly bushfires that blazed through large swathes of Australia in 2019 and 2020 decimated the country's already vulnerable koala population: a 2021 report by the Australian Koala Foundation found that the country lost 30% of its koala population in the past three years. Earlier this year, the Australian government declared the species endangered in much of eastern Australia.
"The situation is pretty dire," Grant Hamilton, director of Conservation AI Hub, tells Mongabay in a video interview. In the face of the climate and biodiversity crises, he says, it's more essential than ever to get citizens involved in conservation efforts. And getting them access to and acquainted with technology is a good place to start.
Mongabay's Abhishyant Kidangoor spoke with Hamilton on the work done by Conservation AI Hub, the role of conservation technology in a climate-ravaged future, and why it's up to the common citizen to stand up for the planet's biodiversity. The following interview has been lightly edited for style and clarity.
Mongabay: To start with, could you tell me where your interest in wildlife conservation stemmed from?
Grant Hamilton: I grew up in a small town in northern New South Wales in Australia, and I was always out in nature. At the time that I grew up, it was very much a conservation-minded kind of area. There were some particularly large protests when I was young that had to do with logging. And that really made an impact on me. Frankly, for a lot of people from my generation, there is a strong drive toward conservation that hasn't necessarily been translated through to the political will globally. But there are still a lot of people who are very devoted to it.
Mongabay: How did technology come to play a part in your work?
Grant Hamilton: I am a quantitative ecologist, which means that I often create models to understand ecology. In the case of threatened species, one of the important things is simply finding where they are, and counting how many of them there are. Fundamentally, if we can't estimate how many animals there are of a particular species, then we don't know if the numbers are going down or up. We don't know if the time, energy and money we are investing into managing them is effective or not. And it's surprisingly hard to find things. Koalas in particular, which is where I did a reasonable amount of this work, are really difficult to see. They are up in a canopy. Humans are down on the ground, looking through binoculars. It's a real challenge. There are some amazing people who do it really quite well. But even when they are doing it well, they are only spotting two koalas for every three that are out there. And that's quite a large error rate. So part of my interest in technology was about how we think about doing things at a large scale and doing it more efficiently.
Mongabay: How do you think technology is faring when it comes to wildlife conservation?
Grant Hamilton: I think it's on the increase. There is a recognition of the advantages. I think people are increasingly recognizing that the scale of the problem is not going to be solved simply by traditional tools. While I will never say that these technologies are the only solution, it would not be smart to ignore them. And it makes a lot of sense to try and expand them to find the areas where they are most practical, most efficient and can save us money.
Mongabay: Could you tell me how Conservation AI Hub works and how your team uses technology on the ground?
Grant Hamilton: Fundamentally, it is a data portal connected to a set of AI algorithms. The partners that we are working with will collect data in an appropriate way using appropriate technology. So that might be using camera traps, or it might be using drones, and they will feed that through to us. Then, we either use the algorithms that we have already developed, or we develop new algorithms, and work with those partners to get that data back to them. For example, after the Black Summer bushfires in Australia in 2019 and 2020, we did some work in Kangaroo Island in South Australia, where we were helping them to use drones to be able to detect koalas. Increasingly, we are also using camera traps.
Mongabay: What is the workflow after the data is collected?
Grant Hamilton: What happens next is that we run the data through an artificial intelligence algorithm to look for koalas. It is possible to get people to look through thermal imagery to look for koalas, but the challenge is that we might be looking at over 50 or 60 or 80 hectares [120-200 acres] at a single time, and that's an enormous amount of thermal imagery data. So we use artificial intelligence to scan through all the data really quickly to be able to find the koalas. We then have a quality assurance phase to make sure that we are not getting too many false positives [koala sightings that aren't actually koalas]. We then share the analyzed data so that everybody can benefit from it, but only if that is appropriate.
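In code terms, a quality-assurance pass like the one Hamilton describes can be as simple as splitting detections on a confidence score. The sketch below is illustrative only, with a made-up detection format and threshold rather than Conservation AI Hub's actual pipeline:

```python
# Toy QA pass over thermal-image detections: auto-confirm high-confidence
# hits, send the rest to a human reviewer. Format and threshold are
# illustrative assumptions.
CONFIDENCE_THRESHOLD = 0.8

detections = [
    {"frame": 12, "bbox": (140, 60, 32, 32), "score": 0.93},
    {"frame": 47, "bbox": (300, 210, 28, 30), "score": 0.41},
]

confirmed = [d for d in detections if d["score"] >= CONFIDENCE_THRESHOLD]
needs_review = [d for d in detections if d["score"] < CONFIDENCE_THRESHOLD]
print(f"{len(confirmed)} auto-confirmed koalas, {len(needs_review)} flagged for review")
```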
Mongabay: What are the cases where sharing the data is not appropriate?
Grant Hamilton: There might be areas where perhaps Traditional Owners [Indigenous Australians with a spiritual or cultural connection to the land] don't consider that it's appropriate for us to share data. Sometimes with threatened species, you don't actually want to let people know where those threatened species are. So, in the near term, there might be an embargo on such data, but the intention is always going to be to share that data.
Mongabay: Could you give me a ground-level example of a project Conservation AI Hub is working on currently?
Grant Hamilton: At the moment, what I am really excited about is one of the projects that the Conservation AI Hub is involved with, called WildSeek. We are working together with Landcare Australia [a grassroots environmental organization], and we are training groups of volunteers to fly drones to collect data, which they send through to my research group at QUT. We then use artificial intelligence to detect koalas, and we deliver the detections back to them. As far as I know, this is a world first, this kind of training and application. The powerful and exciting thing about this is it's the people who manage the land who are actually collecting the data, and then receiving the data back to help them manage better. We are aiming to have five groups on board within the next couple of months, but I see no reason why within two or three years, we may not have 20. Essentially, if you are looking at how to scale conservation, I believe this to be a very effective model.
Mongabay: What has the impact on the ground been like?
Grant Hamilton: It's relatively new. We launched at the beginning of the year. For now, the model is the exciting bit: the model of having volunteers that we are bringing on board to help with this. Often there's a disconnect between the people who monitor and the people who manage. But what we are doing now is amped-up citizen science, which would enable folks on the ground to have better data to be able to manage better. We have one group that we have been working with in Noosa, which is a coastal town in southeast Queensland [state], where there are some really amazing koala habitats. But some of it is quite dense, and it can be quite difficult to get to. So what we have been doing is flying drones on these sites regularly to detect koalas across very large areas where it would have been really challenging for people to be able to do that. The other incredibly exciting thing is, as replanting is going on in that area, we are planning to continually resurvey to determine if koalas are moving into these replanted areas.
Mongabay: Before WildSeek was launched, how was the data collection done?
Grant Hamilton: Earlier, we would be the ones to go out and collect the data. We would send out drone operators from QUT to collect the data, and we would do essentially everything in-house. But it soon became apparent that if you want to scale this up in many countries, sending out a single team of drone operators wasn't enough. It is a very good proof of concept, but it's not effective if you want to make broad-scale ground impacts. So that's how the idea developed to start working with volunteer groups, and to buy the equipment for them and train them so that we can then process the data for them.
Mongabay: What are the challenges you have faced in terms of using technology?
Grant Hamilton: There are a set of hurdles which are going to be issues with any monitoring method you use. That has to do with planning how you fly a drone. Are drones appropriate for the species that you are looking for? Are camera traps appropriate for the species that you are looking for? To fly drones, for example, if you are flying a reasonably sized drone, you need appropriate training. It's large, and you need to understand what the laws are in your area around flying drones. Being able to fly the drone is not enough. You need to be able to fly it in the appropriate way with an appropriate sensor. And there's a whole set of constraints around this: the height, the speed, and the size of the organism that you're looking for. But we have managed to work this out over a number of years for different kinds of species. There is a considerable technical challenge in ingesting data across networks from a long way away. You might get interruptions, and you need to make sure that that data still gets there whole. In each step along the way, there are going to be site-specific hurdles. There will always be issues, but that's part of the fun. And hopefully, we will resolve them in a way that contributes better to conservation.
Mongabay: If you were to chat with someone who develops technology for conservation, what would you tell them are the technological gaps you face?
Grant Hamilton: Drones are just a platform to carry the real thing that we are interested in, and that's the sensor that collects the data. There are some really amazing sensors currently, but they are very expensive, and some of them are so heavy that you might need to fly them in a helicopter rather than on a drone. So what I hope and would expect to see coming along is more high-resolution and more lightweight sensors. But low cost is going to be a very important thing, particularly if we are talking about empowering citizens to go and collect data. Finding ways to enable people to engage with this, and really to democratize conservation, is really important. Partly, you are always going to have to work with folks like us, because the development of quantitative analytical tools and machine-learning tools takes a fair bit of training. But the idea is about data collection, and using people who are out there. There are a lot of people passionate about conservation, and they want to get involved. So marrying the technology with those people who want to help will be an incredibly important thing for the folks who make this technology. If they can step up in that way, that would be a huge boost.
Mongabay: Where do you see Conservation AI Hub going from here?
Grant Hamilton: We want to help to save the world. People are focusing, quite rightly, on the fact that we are facing a climate crisis. But we are also facing a biodiversity crisis. What people don't often recognize is that when biodiversity goes, so do we. I would argue that ecosystems have an innate right to persist. It shouldn't all be about the utility for human beings.
What we are trying to do is to get that first step of monitoring, so that we can point out where the problems are and determine if the management taking place is being done effectively. We want to be able to call out those areas where things are being done badly. It's going to be up to us common citizens to go out there and stand up for biodiversity.
Mongabay: In a rapidly heating world which is also facing a biodiversity crisis, what role do you think technology can play?
Grant Hamilton: There has been a set of skills around wildlife observation and conservation. There's a learning curve, which is potentially hard for people. If we can lower that bar to enable them to collect data in their surrounding environment, in the place where they are invested, the area where they live, if we can lower that bar to allow them to help, that has to be a good thing. That is partly what conservation technology is going to do. It is going to allow us to record and see where things are going wrong. Perhaps occasionally, we might get wins and we might see things that are going well, and we might learn from those successes. But certainly, if we allow people to engage with conservation technology and help them to understand what it means, that awareness, I believe, has to be a good thing for biodiversity. Especially if it translates to the political process and helps to put pressure on politicians, organizations and companies to do better.
Three Methods Researchers Use To Understand AI Decisions – RTInsights
Posted: at 2:19 pm
Making sense of AI decisions is important to researchers, decision-makers, and the wider public. Fortunately, there are methods available to ensure we know more.
Deep-learning models, of the type used by leading-edge AI corporations and academics, have become so complex that even the researchers who built them struggle to understand the decisions being made.
This was shown most clearly to a wide audience during DeepMind's AlphaGo tournament, in which data scientists and professional Go players were regularly bamboozled by the AI's decision-making during the game, as it made unorthodox plays that were not considered the strongest moves.
SEE ALSO: Artificial Intelligence More Accepted Post-Covid According to Study
In an attempt to better understand the models they build, AI researchers have developed three main explanation methods. These are local explanation methods, which explain one specific decision rather than the decision-making of an entire model, which can be challenging given the scale.
Yilun Zhou, a graduate student in the Interactive Robotics Group of the Computer Science and Artificial Intelligence Laboratory (CSAIL), discussed these methods in an MIT News article.
Feature attribution
With feature attribution, an AI model will identify which parts of an input were important to a specific decision. In the case of an x-ray, researchers can see a heatmap of the individual pixels that the model perceived as most important to making its decision.
"Using this feature attribution explanation, you can check to see whether a spurious correlation is a concern. For instance, it will show if the pixels in a watermark are highlighted or if the pixels in an actual tumor are highlighted," said Zhou.
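A common way to produce such a heatmap is gradient-based saliency: differentiate the class score with respect to the input pixels and see which pixels matter most. Here is a minimal PyTorch sketch of that general idea, where `model` is any differentiable image classifier; it is a generic illustration, not the specific method from the article:

```python
# Gradient saliency: pixels whose small changes most affect the class score
# are marked as important.
import torch

def saliency_map(model, image, target_class):
    """image: tensor of shape (channels, height, width)."""
    model.eval()
    image = image.clone().requires_grad_(True)
    score = model(image.unsqueeze(0))[0, target_class]  # scalar class score
    score.backward()                                    # d(score)/d(pixels)
    return image.grad.abs().max(dim=0).values           # (H, W) heatmap
```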
Counterfactual explanation
When coming to a decision, the human on the other side may be confused as to why an AI has decided one way or the other. As AI is being deployed in high-stakes environments, such as prisons, insurance, or mortgages, knowing why an AI rejected an offer or appeal should help applicants attain approval the next time they apply.
"The good thing about the [counterfactual] explanation method is it tells you exactly how you need to change the input to flip the decision, which could have practical usage. For someone who is applying for a mortgage and didn't get it, this explanation would tell them what they need to do to achieve their desired outcome," said Zhou.
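One naive way to compute such a counterfactual is to nudge a single input feature until the model's decision flips. The sketch below assumes a scikit-learn-style classifier; the feature choice and step size are arbitrary assumptions for illustration:

```python
# Greedy one-feature counterfactual search: smallest tried change that
# flips the model's decision.
import numpy as np

def counterfactual(model, x, feature_idx, step=0.1, max_steps=100):
    original = model.predict([x])[0]
    x_cf = np.array(x, dtype=float)
    for _ in range(max_steps):
        x_cf[feature_idx] += step           # nudge one feature (e.g., income)
        if model.predict([x_cf])[0] != original:
            return x_cf                     # decision flipped here
    return None                             # never flipped along this path
```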
Sample importance
Sample importance explanation requires access to the underlying data behind the model. If a researcher notices what they perceive to be an error, they can run a sample importance explanation to see if the AI was fed data that it couldn't compute, which led to an error in judgment.
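The brute-force version of sample importance is leave-one-out retraining: drop each training example in turn and measure how much the prediction on a query point moves. Production systems approximate this with influence functions, but this scikit-learn-style sketch (regression, for simplicity) shows the idea:

```python
# Leave-one-out sample importance: a bigger prediction shift means a more
# influential training example. Brute force, for illustration only.
import numpy as np
from sklearn.base import clone

def sample_importance(model, X_train, y_train, x_query):
    base = clone(model).fit(X_train, y_train).predict([x_query])[0]
    shifts = []
    for i in range(len(X_train)):
        X_loo = np.delete(X_train, i, axis=0)
        y_loo = np.delete(y_train, i, axis=0)
        pred = clone(model).fit(X_loo, y_loo).predict([x_query])[0]
        shifts.append(abs(pred - base))
    return shifts  # shifts[i] = influence of training example i
```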
AI-powered delivery automation streamlined for the enterprise – VentureBeat
Posted: at 2:19 pm
The last two years saw a major rise in home deliveries. Around 60% of the global population will live in cities by 2030, and last-mile deliveries are expected to grow by a further 78%, according to the World Economic Forum.
Currently, the global last-mile delivery market size stands at $130 billion, with 4,160 parcels shipped every second. That comes to about 150 billion annually, a number that keeps rising owing to ecommerce growth and changing consumer behavior, according to the Pitney Bowes Parcel Shipping Index. All major organizations, like McDonald's, Starbucks, Walmart and UPS, are investing heavily to focus on home deliveries and provide the best digital delivery experience.
Today, it's possible to order everything online, and companies have to be prepared to fulfill these customer demands. Both business-to-consumer (B2C) and business-to-business (B2B) shipments have grown significantly, and customers expect the best delivery experience.
LogiNext, a New York-based logistics automation technology company for large enterprises, claims its proprietary artificial intelligence (AI)-powered logistics management platform provides a software-as-a-service (SaaS) solution to help enterprises digitize, optimize and automate end-to-end logistics operations. With predictive ETAs, the platform kicks into action even before a consumer places an order online and helps a brand orchestrate deliveries until they reach the doorstep (including handling returns).
Dhruvil Sanghvi, CEO at LogiNext, told VentureBeat that LogiNext's platform has tracked more than 13 billion location data points, supporting and processing more than 500 million orders annually. According to Sanghvi, the platform enables clients to deliver faster and more efficiently, offering clear-cut insights into the best possible routes for drivers and giving real-time visibility over the entire logistical operation.
Sanghvi noted that traditional brands aiming to offer home deliveries like digital natives have a few hurdles to cross.
1. Lack of digitization: Several large organizations have logistics operations heavily dependent on manual tasks. Digitizing processes from inventory management to collecting e-signatures to analytics and reporting is critical for smooth home delivery operations.
2. Legacy technology systems: While many companies use some technology, they are often legacy on-premise systems that are expensive to install and maintain. New age SaaS platforms can easily replace legacy systems and give operations managers a great digital experience to manage deliveries.
3. Driver management: There are orders coming in from several channels, and organizations rely on in-house drivers or a third-party fleet. Managing all of this on Excel sheets is highly inefficient and leads to increasing delivery costs.
Sanghvi said what LogiNext does is heavy on artificial intelligence (AI). The SaaS platform helps enterprises deliver in a smarter manner and, hence, faster.
"For our algorithm to assign orders and figure out the best routes automatically, it must consider things like traffic patterns, weather conditions, transport regulations, peak hours, customer preference and several other variables," he said.
To help enterprises optimize logistics, LogiNext's AI and ML capabilities are helpful for route optimization, capacity utilization, dynamic order allocation and visibility over the entire supply chain.
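At its core, route optimization is a variant of the traveling-salesman problem. The toy heuristic below, nearest-neighbor routing on straight-line distance, is the simplest possible flavor; a production system like the one described would also weigh traffic, time windows, vehicle capacity and the other variables Sanghvi lists:

```python
# Nearest-neighbor delivery routing: always drive to the closest unvisited
# stop. A toy baseline, not LogiNext's algorithm.
import math

def plan_route(depot, stops):
    route, current, remaining = [depot], depot, list(stops)
    while remaining:
        nxt = min(remaining, key=lambda stop: math.dist(current, stop))
        remaining.remove(nxt)
        route.append(nxt)
        current = nxt
    return route

print(plan_route((0, 0), [(2, 3), (1, 1), (5, 4)]))
# [(0, 0), (1, 1), (2, 3), (5, 4)]
```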
"One of the foremost advantages of using LogiNext is digitization. There are still a lot of manual processes in the logistics industry which first need to get digitized and then move towards automation. The more historic and current data we have, the more we're able to use AI to optimize logistics delivery," he said.
According to Sanghvi, home deliveries for food and beverage (F&B), retail, ecommerce and courier companies are the largest segments adopting logistics automation to make last mile deliveries more efficient.
One of the major challenges in the enterprise ecosystem today is unstructured data. Sanghvi said many supply chain companies still handle a lot of their processes manually, collecting data in Excel sheets, which makes it difficult to contextualize that data. He said LogiNext helps digitize companies' entire systems to ensure they are collecting, labeling and using data in the best possible way.
To achieve true success in their digital transformation journey, Sanghvi said organizations must prioritize automation, making processes more efficient. Logistics has a huge scope for this.
"Digital transformation and automating processes are two huge impact areas we help companies with," said Sanghvi, who highlighted how one of LogiNext's major customers in North America reduced the order life cycle of a product from 12 hours to 2.2 hours.
While Sanghvi considers the likes of Project 44 and Bringg as LogiNext's major competitors, he said LogiNext has a differentiator in the fact that it is an end-to-end platform with elements of visibility and routing. With 100+ out-of-the-box API integrations, companies can plug and play to go live quickly, as they don't need to revamp any of their existing technology. The LogiNext platform sits smoothly over any existing technology through the APIs and webhooks.
He said another differentiator is LogiNext's ability to provide end users with a superior customer experience, where ETAs are communicated to the end customer in real time with contextual alerts and notifications.
"A lot of our competitors often focus on one particular thing: either they focus only on visibility or only on routing. We focus on all aspects of the delivery process, from the first mile to the last mile. So, you can think of us more as horizontal players rather than vertical players," said Sanghvi.
Founded in 2014 by CEO Dhruvil Sanghvi, LogiNext is headquartered in New York, with the companys founding team and business development teams in New Jersey. While LogiNexts engineering and research teams are based in Mumbai, the company has small offices in Jakarta, Singapore and Dubai.
Growing at 120% year-over-year and with a valuation between $300 million and $500 million, LogiNext says it aims to go for an initial public offering (IPO) in the next five to seven years, building a global company with cutting-edge tech for logistics, especially in the home delivery and last-mile segment.
LogiNext has a current headcount of about 200 and has raised $49.5 million in total funding to date from investors like Tiger Global, Steadview Capital and the Alibaba group of companies. The company currently serves 200+ enterprise clients in over 50 countries globally, including companies like McDonald's, KFC, Burger King, Decathlon and Singapore Post, among others.
Sanghvi said LogiNext provides its software to companies that want to compete with Amazon on their delivery experience. The company has been named a sample vendor in Gartner's Hype Cycle on many occasions, and also received Frost & Sullivan's 2022 Customer Value Leadership Award.
Researchers announce new AI-based technology that can create short videos based on single images – TechSpot
Posted: at 2:19 pm
Why it matters: Researchers continue to find new ways to leverage artificial intelligence and machine learning capabilities as the technologies evolve. Earlier this week, Google scientists announced the creation of Transframer, a new framework with the ability to generate short videos based on singular image inputs. The new technology could someday augment traditional rendering solutions, allowing developers to create virtual environments based on machine learning capabilities.
The new framework's name (and, in some ways, concept) are a nod to another AI-based model known as Transformer. Originally introduced in 2017, Transformer is a novel neural network architecture with the ability to generate text by modeling and comparing other words in a sentence. The model has since been included in standard deep learning frameworks such as TensorFlow and PyTorch.
Just as Transformer uses language to predict potential outputs, Transframer uses context images with similar attributes in conjunction with a query annotation to create short videos. The resulting videos move around the target image and visualize accurate perspectives despite having not provided any geometric data in the original image inputs.
The new technology, demonstrated using Google's DeepMind AI platform, functions by analyzing a single photo, the context image, to obtain key pieces of image data and generate additional images. During this analysis, the system identifies the picture's framing, which in turn helps the system to predict the picture's surroundings.
The context images are then used to further predict how an image would appear from different angles. The prediction models the probability of additional image frames based on the data, annotations, and any other information available from the context frames.
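Conceptually, that prediction loop is autoregressive: each generated frame is appended to the context and conditions the next one. The sketch below is schematic only; `model` and its `sample` method are hypothetical stand-ins, not DeepMind's actual Transframer API:

```python
# Schematic autoregressive frame generation from context images plus a
# query annotation (e.g., a target camera pose). Hypothetical interface.
def generate_video(model, context_frames, annotations, query_pose, n_frames=16):
    frames = list(context_frames)
    for _ in range(n_frames):
        # Sample the next frame from the model's predicted distribution,
        # conditioned on everything generated so far.
        next_frame = model.sample(frames, annotations, query_pose)
        frames.append(next_frame)
    return frames[len(context_frames):]  # return only the new frames
```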
The framework marks a huge step in video technology by providing the ability to generate reasonably accurate video from a very limited set of data. Transframer has also shown extremely promising results on other video-related tasks and benchmarks, such as semantic segmentation, image classification, and optical flow prediction.
The implications for video-based industries, such as game development, are potentially huge. Current game development environments rely on core rendering techniques such as shading, texture mapping, depth of field, and ray tracing. Technologies such as Transframer have the potential to offer developers a completely new development path by using AI and machine learning to build their environments while reducing the time, resources, and effort needed to create them.
Image credit: DeepMind
Insilico Medicine presents on AI for drug discovery at 9th Annual Aging Research and Drug Discovery Conference – EurekAlert
Posted: at 2:19 pm
Image: Quentin Vanhaelen, PhD, of Insilico Medicine will present on the company's AI-powered target discovery platform PandaOmics at the 9th annual Aging Research and Drug Discovery conference.
Credit: c/o Insilico Medicine
The 9th annual Aging Research and Drug Discovery Conference, happening Aug. 29-Sept. 2 at the University of Copenhagen, will feature researcher Quentin Vanhaelen, PhD, from Insilico Medicine discussing the company's artificial intelligence target discovery platform, PandaOmics. The platform has been validated through numerous drugs in development, including the first AI-discovered and AI-designed drug for idiopathic pulmonary fibrosis, currently in Phase I trials. Vanhaelen is one of over 70 academic and industry leaders in longevity speaking at the event, and will describe the progress that PandaOmics has made since Insilico launched it in 2020.
PandaOmics uses aging as an important biomarker, sifting through trillions of data points from clinical trials, research grants, and omics data samples, including transcriptomics, genomics, epigenomics, and proteomics. The system identifies where aging and disease intersect and how aging contributes to poorer health. Researchers have used PandaOmics to predict nine molecular targets for new drugs that can combat aging as well as aging-associated diseases including Alzheimer's, Parkinson's, cirrhosis, and rheumatoid arthritis.
Vanhaelen will discuss the possibilities of developing dual-purpose drugs using AI, and talk about how the PandaOmics platform uses data to discover where age-associated and non-age-associated diseases overlap, evaluating targets' therapeutic potential by ranking them for druggability and safety. He will also talk about the benefits of an AI approach, which allows scientists to make these discoveries in record time: the nine novel dual-purpose aging and disease targets were discovered and published in less than two months.
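As a rough illustration of what "ranking targets for druggability and safety" can look like in practice, here is a toy scoring table; the gene names, scores and weights are invented for the example and have nothing to do with PandaOmics internals:

```python
# Toy target prioritization: a weighted combination of druggability and
# safety scores. All values and weights are illustrative assumptions.
import pandas as pd

targets = pd.DataFrame({
    "target":       ["GENE_A", "GENE_B", "GENE_C"],
    "druggability": [0.81, 0.55, 0.92],
    "safety":       [0.70, 0.90, 0.40],
})
targets["priority"] = 0.6 * targets["druggability"] + 0.4 * targets["safety"]
print(targets.sort_values("priority", ascending=False))
```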
Finding ways to identify and treat aging is one of the many topics being explored relative to expanding human lifespan and healthspan at the ARDD. The conference, founded by Insilico Medicine founder and CEO Alex Zhavoronkov, PhD, brings together experts in longevity to share breakthroughs, collaborate, and advance aging research.
"This conference was designed to create the world's first platform for the pharmaceutical industry to actively engage in and incorporate the latest discoveries in credible aging research into every aspect of their internal R&D strategy," says Zhavoronkov.
Vanhaelen has been working with Insilico since 2016 and holds a PhD in theoretical physics from the University of Brussels. He founded a consultancy company called Insilicoscreen, focused on the analysis of signaling pathway dynamics, that was acquired by Insilico Medicine. His research interests include theories of aging, signaling pathway activation, modeling of dynamical systems, and applications of deep learning techniques for drug discovery.
He will present on PandaOmics on September 2, 7:40-8pm EST.
About the ARDD
The 9th annual Aging Research and Drug Discovery (ARDD) conference brings together leading academic and industry speakers in aging research with prominent startups, venture capitalists, and editors of industry journals. The event will be held virtually and in person at the University of Copenhagen Aug. 29-Sept. 2.
Details and registration: http://www.agingpharma.org/
About Insilico Medicine
Insilico Medicine, a clinical stage end-to-end artificial intelligence (AI)-driven drug discovery company, is connecting biology, chemistry, and clinical trials analysis using next-generation AI systems. The company has developed AI platforms that utilize deep generative models, reinforcement learning, transformers, and other modern machine learning techniques to discover novel targets and to design novel molecular structures with desired properties. Insilico Medicine is delivering breakthrough solutions to discover and develop innovative drugs for cancer, fibrosis, immunity, central nervous system (CNS) diseases and aging-related diseases.
For more information, visit http://www.insilico.com
For media inquiries, contact media@insilicomedicine.com
Death, resurrection and digital immortality in an AI world – VentureBeat
Posted: at 2:19 pm
I have been thinking about death lately. Not a lot, just a little. Possibly because I recently had a month-long bout of Covid-19, and I read a recent story about the passing of the actor Ed Asner, famous for his role as Lou Grant in The Mary Tyler Moore Show. More specifically, the story of his memorial service, where mourners were invited to talk with Asner through an interactive display that featured video and audio he recorded before he died. The experience was created by StoryFile, a company with the mission to make AI more human. According to the company, its proprietary technology and AI can match pre-recorded answers with future questions, allowing for a real-time yet asynchronous conversation.
In other words, it feels like a Zoom conversation with a living person.
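Matching a live question to the closest pre-recorded answer is, at heart, a retrieval problem. Here is a minimal sketch of that idea using TF-IDF similarity; the recorded prompts and clip names are invented, and TF-IDF merely stands in for whatever embedding StoryFile actually uses:

```python
# Toy question-to-clip retrieval: return the pre-recorded answer whose
# prompt is most similar to the incoming question. Illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

recorded = {
    "What was your proudest moment?": "clip_017.mp4",
    "Do you have any regrets?": "clip_042.mp4",
}
prompts = list(recorded)
vectorizer = TfidfVectorizer().fit(prompts)

def answer(question):
    sims = cosine_similarity(vectorizer.transform([question]),
                             vectorizer.transform(prompts))
    return recorded[prompts[sims.argmax()]]

print(answer("Any regrets about your career?"))  # -> clip_042.mp4
```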
Even though the deceased is materially gone, their legacy appears to live on, allowing loved ones, friends, and other interested parties to interact with them. The company has also developed these experiences for others, including the still very much alive William Shatner. Through this interactive experience, I asked Shatner if he had any regrets. He then spoke at length about personal responsibility, eventually coming back to the question (in Shatner-like style). The answer, by the way, is no.
There are other companies developing similar technology, such as HereAfter AI. Using conversational AI, the company aspires to reinvent remembrance, offering its clients digital immortality. This technology evolved from an earlier chatbot developed by a son hoping to capture his dying father's memories.
It is easy to see the allure of this possibility. My father passed away ten years ago, shortly before this technology was available. While he did write a short book containing some of his memories, I wish I had hours of video and audio of him talking about his life that I could query and both see and hear the responses in his own voice. Then, in some sense, he would seem to still be alive.
This desire to bring our deceased loved ones back to life is understandable as a motivation and helps to explain these companies and their potential. Another company is ETER9, a social network set up by Portuguese developer Henrique Jorge. He shared the multi-generational appeal of these capabilities: "Some years from now, your great-grandchildren will be able to talk with you even if they didn't have the chance to know you in person."
In Be Right Back, an episode from the Netflix show Black Mirror, a woman loses her boyfriend in a car accident and develops an attachment to an AI-powered synthetic recreation. This spoke to the human need for love and connection.
In much the same way, a young man named Joshua who lost his girlfriend Jessica to an autoimmune disease recreated her presence through a text-based bot developed by Project December using OpenAI's GPT-3 large language transformer. He provided snippets of information about Jessica's interests and their conversations, as well as some of her social media posts.
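Mechanically, this kind of persona bot is little more than a primed completion prompt. Here is a hedged sketch using the GPT-3 completion API of the time; the persona details are invented for illustration, not Jessica's actual data:

```python
# Persona priming with a GPT-3-style completion call (the openai library's
# pre-2023 Completion API). The persona text is an invented illustration.
import openai  # assumes openai.api_key is set

persona = (
    "The following is a conversation with Jessica. She loved astronomy and "
    "indie music, and always signed off with 'talk soon'.\n"
    "Joshua: Hey, it's me.\n"
    "Jessica:"
)

response = openai.Completion.create(
    model="text-davinci-002",
    prompt=persona,
    max_tokens=80,
    stop=["Joshua:"],  # hand the turn back to the human
)
print(response.choices[0].text.strip())
```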
The experience for Joshua was vivid and moving, especially since the bot said exactly the sort of thing the real Jessica would have said (in his estimation). Moreover, interacting with the bot enabled him to achieve a kind of catharsis and closure after years of grief. This is all the more remarkable since he had tried therapy and dating without significant results; he still could not move on. In discussing these bot capabilities, Project December developer Jason Rohrer said: "It may not be the first intelligent machine. But it kind of feels like it's the first machine with a soul."
It likely will not be the last. For example, Microsoft announced in 2021 that it had secured a patent for software that could reincarnate people as a chatbot, opening the door to even wider use of AI to bring the dead back to life.
We've got to verify it legally
To see if she is morally, ethically
Spiritually, physically
Positively, absolutely
Undeniably and reliably dead!
In the novel Fall; or, Dodge in Hell, author Neal Stephenson imagines a digital afterlife known as Bitworld, contrasting with the here and now of Meatworld. In the novel, the tech industry eventually develops the ability to map Dodge's brain through precise scanning of the one hundred billion neurons and seven hundred trillion synaptic connections humans have, upload this connectome to the cloud and somehow turn it on in a digital realm. Once Dodge's digital consciousness is up and running, thousands of other souls who have died in Meatworld join the evolving AI-created landscape that becomes Bitworld. Collectively, they develop a digital world in which these souls have what appears to be consciousness and a form of tech-fueled immortality, a digital reincarnation.
Just as the technology did not exist ten years ago to create bots that virtually maintain the memories and, to a degree, the presence of the deceased, today the technology does not exist to create a human connectome or Bitworld. According to Louis Rosenberg of Unanimous A.I.: "This is a wildly challenging task but is theoretically feasible."
And people are working on these technologies now through the ongoing advances in AI, neurobiology, supercomputing, and quantum computing.
Neuralink, a company founded by Elon Musk focused on brain-machine interfaces, is working on aspects of mind-uploading. Some number of wealthy people, including tech entrepreneur Peter Thiel, have reportedly arranged to have their bodies preserved after death until such time as the requisite technology exists. Alcor is one such organization offering this preservation service. As futurist and former Alcor CEO Max More said: "Our view is that when we call someone dead it's a bit of an arbitrary line. In fact, they are in need of a rescue."
The mind-uploading concept is also explored in the Amazon series Upload, in which a man's memories and personality are uploaded into a lookalike avatar. This avatar resides in what passes for an eternal digital afterlife in a place known as Lakeview. In response, an Engadget article asked: "Even if some technology could take all of the matter in your brain and upload it to the cloud, is the resulting consciousness still you?"
This is one of many questions, but ultimately may be the most relevant and one that likely cannot be answered until the technology exists.
When might that be? In the same Engadget article, Upload showrunner Greg Daniels implies that the ability to upload consciousness is all about information in the brain, noting that it is a finite amount, albeit a large amount. "And if you had a large enough computer, and a quick enough way to scan it, you ought to be able to measure everything, all the information that's in someone's brain."
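How large is "large"? A back-of-the-envelope estimate using the neuron and synapse counts cited earlier in this piece; the bytes-per-synapse figure is a loose assumption, so treat the result as an order-of-magnitude guess:

```python
# Rough storage estimate for a whole-brain connectome, using the counts
# quoted above. BYTES_PER_SYNAPSE is an assumption, not a measured figure.
NEURONS = 100e9             # one hundred billion neurons
SYNAPSES = 700e12           # seven hundred trillion synaptic connections
BYTES_PER_SYNAPSE = 8       # assumed: connectivity plus a weight

petabytes = SYNAPSES * BYTES_PER_SYNAPSE / 1e15
print(f"~{petabytes:.1f} PB just for the synapse graph")  # ~5.6 PB
```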
The ethical questions this raises could rival the connectome in number and will become critical much sooner than we think.
Although in the end, I would just like to talk with my dad again.
Gary Grossman is the senior VP of technology practice at Edelman and global lead of the Edelman AI Center of Excellence.