Daily Archives: February 5, 2022

Liftoff lifts off: Area15's newest ride has view of Strip and beyond – Las Vegas Review-Journal

Posted: February 5, 2022 at 5:09 am

Anticipation hangs in the air. No, wait, that's our breath that we're seeing suspended before us. It's cold, but things are heating up.

"It's like waiting for fireworks," a bundled-up blonde woman observes in front of Area15 on Wednesday evening, noting the Fourth of July-like sense of expectancy.

Those fireworks come soon enough, spark-spewing pyrotechnics countering the wind-driven night chill. And with that, it's go time.

Goggles-sporting guides lead the way. The illuminated obelisk beckons, cradled in a steel tower criss-crossed with illuminated spires. Liftoff is ready for lift off.

It's Wednesday night, and Area15's new open-air balloon ride is getting its official launch.

Think of it as a steampunk Around the World in 80 Days, with craft cocktails and 360-degree panoramic views of the Strip in place of transcontinental flight.

At the base of the attraction is a themed bar made to look like a lost desert hang for wayward aviators.

There's antiquities here: A typewriter rests on one table; encyclopedias are strewn about, a nod to those long-lost days of manual research, when learning the genus of an ostrich wasn't as simple as hitting the Google button on the internet machine.

The point?

To make the past feel indivisible from the present.

"It is meant to evoke the last place some aeronauts and astronauts drank and dreamed of building a tower," explains Michael Beneville, Area15's chief creative officer, addressing the crowd as Liftoff welcomes its first riders. "And they did."

Beverage in hand, it's time to board.

Building a mystery

He remembers standing in this space when thats all it was: open space.

Michael Beneville recalls surveying the then-barren parcel of land that would eventually become Area15 with Winston Fisher, the company's CEO, years back.

"Winston and I stood in this parking lot with our two respective teams, it was an empty lot that stretches from the highway to Palace Station, and we thought, 'What would we put here?'" says Beneville, clad in a disco-ball shiny silver suit. "This was born out of its proximity to the Strip, which is close, but that's the other side of the moon in terms of the natural traffic flow of a visitor."

"What would make them go across the highway?" he continues. "And we thought the only thing that would actually do that would be a genuine curiosity about, 'What the hell is that?'"

Area15 was built on these kinds of open-ended questions.

Since opening in September 2020, the complex has featured a number of attractions, like Meow Wolf's immersive art experience Omega Mart and the winding maze of booze that is Lost Spirits, that pointedly leave something to the imagination, that keep visitors guessing about exactly what it is they're taking in, and, in doing so, ideally keep them coming back.

It's worked so far.

Not only does curiosity kill cats, it also lines coffers: In its first full year of operation, Area15 drew nearly two million visitors, with the smash success of Omega Mart leading the way, drawing over 800,000 guests alone.

Liftoff is the latest addition to Area15's arsenal of the far-out.

While Liftoff certainly packs more straightforward kicks than puzzle-piecing together the 60-plus experiences that comprise Omega Mart into a coherent whole (what isn't?), the core idea is the same: when you're up there in the sky looking this way and that way, you're forming your own narrative of what it is you're choosing to see.

"I think it's ironic," Beneville notes, "that a lot of places that are about imagination don't leave much to the imagination, you know?"

Seven minutes in the heavens

"3-2-1 liftoff!" everyone says in unison, voices elevated like the rest of us will be soon enough.

Seat belts clicked into place, cell phones confined to see-through cases worn around the neck so that they won't be dropped, the steady ascension begins.

Its a gradual climb, allowing any clammy-palmed acrophobes to keep a leash on their nerves.

People chatter; teeth chatter.

Dance music plays as pink and blue lights pulsate.

At the top of the attraction, the 16-seat gondola spins slowly; you can take everything in without turning your head.

"It's an observation deck of sorts," Beneville explains.

You can see for miles in all directions, mountains to the west, man-made grandeur to the east, your feet dangling above the parking lot 13 stories below.

It's a mix of sophistication and simplicity, a space-age structure housing hot air balloon technology that dates back nearly 250 years; a futuristic ride predicated on the primal thrill of being really, really high up in the air.

Seven minutes later, we're back on the ground.

The ride's over; the rides have just begun.

"It's not a billion-dollar thing," Beneville says of Liftoff, which is now open to the public. "It's actually some cinder blocks, some cool art, some couches that are painted and a helluva cool ride."

Contact Jason Bracelin at jbracelin@reviewjournal.com or 702-383-0476. Follow @jbracelin76 on Instagram

Here is the original post:

Liftoff lifts off: Area15's newest ride has view of Strip and beyond - Las Vegas Review-Journal


AI, the brain, and cognitive plausibility – TechTalks

Posted: at 5:09 am

By Rich Heimann

This article is part of the philosophy of artificial intelligence, a series of posts that explore the ethical, moral, and social implications of AI today and in the future.

Is AI about the brain?

The answer is often, but not always. Many insiders and most outsiders believe that if a solution looks like a brain, it might act like the brain. If a solution acts like a brain, then the solution will solve other problems like humans solve other problems. What insiders have learned is that solutions that are not cognitively plausible teach them nothing about intelligence, or at least nothing more than they knew before they started. This is the driving force behind connectionism and artificial neural networks.

That is also why problem-specific solutions designed to actually play to their strengths (strengths that are not psychologically or cognitively plausible) fall short of artificial intelligence. For example, Deep Blue is not real AI because it is not cognitively plausible and will not solve other problems. The accomplishment, while profound, is an achievement in problem-solving, not intelligence. Nevertheless, chess-playing programs like Deep Blue have shown that the human mind can no longer claim superiority over a computer on this task.

Let's consider approaches to AI that are not based on the brain but still seek cognitive plausibility. Shane Legg and Marcus Hutter are both a part of Google DeepMind. They explain the goal of artificial intelligence as an autonomous, goal-seeking system, [for which] "intelligence measures an agent's ability to achieve goals in a wide range of environments."

This definition is an example of behaviorism. Behaviorism was a reaction to 19th-century philosophy of the mind, which focused on the unconscious, and psychoanalysis, which was ultimately challenging to test experimentally. John Watson, professor of psychology at Johns Hopkins University, spearheaded the scientific movement in the first half of the twentieth century. Watson's 1913 "Behaviorist Manifesto" sought to reframe psychology as a natural science by focusing only on observable behavior, hence the name.

Behaviorism aims to predict human behavior by appreciating the environment as a determinant of that behavior. By concentrating only on observable behavior and not the origin of the behavior in the brain, behaviorism became less and less a source of knowledge about the brain. In fact, to the behaviorist, intelligence does not have mental causes. All the real action is in the environment, not the mind. Ironically, DeepMind embraces the philosophy of operant conditioning, not the mind.

In operant conditioning, also known as reinforcement learning, an agent learns that getting a reward depends on action within its environment. The behavior is said to have been reinforced when the action becomes more frequent and purposeful. This is why DeepMind does not define intelligence: it believes there is nothing special about it. Instead, intelligence is stimulus and response. While an essential component of human intelligence is the input it receives from the outside world, and learning from the environment is critical, behaviorism purges the mind and other internal cognitive processes from intellectual discourse.
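
To make the reinforcement learning loop concrete, here is a minimal sketch of tabular Q-learning, a textbook algorithm in this family; the one-dimensional environment, reward, and hyperparameters are invented for illustration.

```python
import random

# Minimal tabular Q-learning: an agent on a five-cell track learns that
# moving right eventually yields a reward at the last cell. Environment,
# reward, and hyperparameters are invented for illustration.
N_STATES = 5
ACTIONS = [-1, +1]                      # step left or step right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2   # learning rate, discount, exploration
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0   # reward only at the goal
    return nxt, reward

for episode in range(500):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        nxt, r = step(s, a)
        # The update reinforces actions whose outcomes lead toward reward.
        best_next = max(Q[(nxt, act)] for act in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = nxt

# After training, the learned policy should be "move right" in every state.
print({s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)})
```

Note how the sketch matches the behaviorist framing above: nothing inside the agent is modeled except a table of stimulus-response values, updated by reward alone.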

This point was made clear in a recent paper by David Silver, Satinder Singh, Doina Precup, and Richard Sutton from DeepMind titled "Reward is Enough." The authors argue that maximizing reward is enough to drive behavior that exhibits most, if not all, attributes of intelligence. However, reward is not enough. The statement itself is simplistic, vague, circular, and explains little, because the assertion is meaningless outside highly structured and controlled environments. Besides, humans do many things for no reward at all, like writing fatuous papers about rewards.

The point is this: suppose you or your team talk about how intelligent or cognitively plausible your solution is. I see this kind of solution-arguing quite a bit. If so, you are not thinking enough about a specific problem or the people impacted by that problem. Practitioners and business-minded leaders need to know about cognitive plausibility because it reflects the wrong culture. Real-world problem solving addresses the problems the world presents to intelligence, and its solutions are hardly ever cognitively plausible. While insiders want their goals to be understood and shared by their solutions, your solution does not need to understand that it is solving a problem, but you do.

If you have a problem to solve that aligns with a business goal and seek an optimal solution to accomplish that goal, then how cognitively plausible some solution is, is unimportant. How a problem is solved is always secondary to whether a problem is solved, and if you don't care how, you can solve just about anything. The goal itself and how optimal a solution is for a problem are more important than how the goal is accomplished, whether the solution was self-referencing, or what a solution looked like after you didn't solve the problem.

About the author

Rich Heimann is Chief AI Officer at Cybraics Inc., a fully managed cybersecurity company. Founded in 2014, Cybraics operationalized many years of cybersecurity and machine learning research conducted at the Defense Advanced Research Projects Agency. Rich is also the author of Doing AI, a book that explores what AI is, is not, what others want AI to become, what you need solutions to be, and how to approach problem-solving. Find out more about his book here.

See the rest here:

AI, the brain, and cognitive plausibility - TechTalks


The US can compete with China in AI education: here's how | TheHill – The Hill

Posted: at 5:09 am

The artificial intelligence (AI) strategic competition with China is more intense than ever. To many, the stakes have never been higher: whoever leads in AI will lead globally.

At first glance, China appears to be well-positioned to take the lead when it comes to AI talent. China is actively integrating AI into every level of its education system, while the United States has yet to embrace AI education as a strategic priority. This will not do. To maintain its competitive edge, the United States must adopt AI education and workforce policies that are targeted and coordinated. Such policies must also increase AI-specific federal investment and encourage industry partnerships.

Upon first glance, the state of U.S. AI education appears to be on a positive trajectory. Recent years have seen a proliferation of AI education materials outside the classroom: a rise in online AI education programs at all levels, including K-12 summer camps, boot camps, and a range of certificates and industry-academia partnerships. Nearly 300 different organizations now offer AI or computer science summer camps to K-12 students. Other K-12 learning opportunities include after-school programs, competitions and scholarships, including explicit outreach to underrepresented groups in computer science education to address race and gender disparities.

However, the reach and effectiveness of these piecemeal efforts tell a different story. There are no standardization or quality benchmarks for the maze of online offerings, nor data on their reach. Moreover, outside of a handful of schools, very little AI education is happening in the classroom. Integrating any new education into classrooms is notoriously slow and difficult, and AI education will be no exception. If anything, it faces an even steeper uphill battle as schools across the country are in a constant struggle over competing priorities.

Meanwhile, China's rollout and scale of AI education dramatically eclipse U.S. initiatives. While it is too early to assess the effectiveness and quality of China's AI education programs, our research at Georgetown University's Center for Security and Emerging Technology (CSET) reveals that China's Ministry of Education is rapidly implementing AI curricula across all education levels and has even mandated high schools to teach AI coursework since 2018. In Beijing, as well as Zhejiang and Shandong provinces, education authorities have integrated Python into the notoriously difficult Gaokao college entrance exam.

At the postsecondary level, China's progress appears even more impressive. In 2019, the Ministry of Education standardized an undergraduate AI major, which today is offered at 345 universities and has been the most popular new major in China. Additionally, our tally indicates at least 34 universities have AI institutes that often train both undergraduate and graduate students and pursue research in areas such as natural language processing, robotics, medical imaging, smart green technology and unmanned systems. The U.S. has a world-class university system, but AI majors in large part remain a specialization of computer science.

The U.S. education system is not designed to operate like China's. Nor should it be. There are inherent advantages in a system that allows for a greater degree of educational autonomy. This gives breathing room for experimentation, creativity and innovation among U.S. educational institutions and opens doors for collaboration with the local community, private sector, philanthropic organizations and other relevant stakeholders.

But for experimental AI education initiatives to be successful, they must be evaluated and scaled inclusively throughout the education system. In this context, the decentralized nature of the U.S. education system can pose a challenge: curricula, teacher training and qualifications, and learning standards are all fragmented by different state approaches.

For instance, computer science coursework is currently available at 51 percent of U.S. high schools but, unlike in China, is not required in most cases. Initiatives are cropping up in various schools around the country, but a lack of coordination in delivering comprehensive awareness, cross-state collaboration and shared assessment metrics hinders these nascent programs from having a nationwide, widespread impact on AI education.

Implementing competitive AI education across the United States is no easy task: there are no shortcuts and no single solution. There are, however, two elements that education leaders and policymakers should prioritize: coordination and investment.

For coordination at the federal level, one path forward is through the White House's National Artificial Intelligence Initiative Office for Education and Training, which can help coordinate AI education, training and workforce development policy across the country. At the same time, community and state-level engagement to implement, evaluate and scale AI education initiatives is likely to be just as important as federal efforts.

For example, the Rhode Island Department of Elementary and Secondary Education is leveraging partnerships with private universities and nonprofits to strengthen its K-12 computer science initiative. Results are starting to show promise: There has been a 17-fold increase in Advanced Placement computer science exams taken since 2016; however, this still represents a small fraction of the overall student body.

Adequate and diversified investment in AI education is also essential. Federal funding can help close accessibility gaps between states. To that end, Congress can appropriate funding for states to provide public K-12 students with AI experiential learning opportunities and K-12 educators with the required training and support. State and local governments can also fund teacher training initiatives to encourage more educators to become certified in computer science or offer ongoing professional development. Concurrently, funding from nonprofit and private sectors can complement federal, state-level and local investments.

Ultimately, successful AI education implementation and adoption will be a national endeavor requiring participation from federal, state and local governments, as well as nonprofits, academia and industry. Coordination within the education ecosystem will help to spur ideas and initiatives.

For those touting U.S. innovation as a competitive strength vis-à-vis China, it should be nothing less.

Kayla Goode is a research analyst at Georgetown University's Center for Security and Emerging Technology (CSET), where she works on the CyberAI Project.

Dahlia Peterson is a research analyst at Georgetown University's Center for Security and Emerging Technology (CSET). Follow her on Twitter @dahlialpeterson.

See the original post:

The US can compete with China in AI education: here's how | TheHill - The Hill


Here’s What Henry Kissinger Thinks About the Future of Artificial Intelligence – Gizmodo

Posted: at 5:09 am

Photo: Adam Berry (Getty Images)

One of the core tenets running throughout The Age of AI is also, undoubtedly, one of the least controversial. With artificial intelligence applications progressing at breakneck speed, both in the U.S. and other tech hubs like China and India, government bodies, thought leaders, and tech giants have all so far failed to establish a common vocabulary or a shared vision for what's to come.

As with most issues discussed in The Age of AI, the stakes are exponentially higher when the potential military uses for AI enter the picture. Here, more often than not, countries are talking past each other and operating with little knowledge of what the other is doing. This lack of common understanding, Kissinger and Co. wager, is like a forest of bone-dry kindling waiting for an errant spark.

"Major countries should not wait for a crisis to initiate a dialogue about the implications (strategic, doctrinal, and moral) of these [AI's] evolutions," the authors write. Instead, Kissinger and Schmidt say they'd like to see an environment where major powers, both government and business, pursue their competition within a framework of verifiable limits.

Negotiation should not only focus on moderating an arms race but also on making sure that both sides know, in general terms, what the other is doing. In a general sense, the institutions holding the AI equivalent of a nuclear football have yet to even develop a shared vocabulary to begin a dialogue.

See the rest here:

Here's What Henry Kissinger Thinks About the Future of Artificial Intelligence - Gizmodo


Can you trust AI to protect AI? – VentureBeat

Posted: at 5:09 am


Now that AI is heading into the mainstream of IT architecture, the race is on to ensure that it remains secure when exposed to sources of data that are beyond the enterprise's control. From the data center to the cloud to the edge, AI will have to contend with a wide variety of vulnerabilities and an increasingly complex array of threats, nearly all of which will be driven by AI itself.

Meanwhile, the stakes will be increasingly high, given that AI is likely to provide the backbone of our healthcare, transportation, finance, and other sectors that are crucial to support our modern way of life. So before organizations start to push AI into these distributed architectures too deeply, it might help to pause for a moment to ensure that it can be adequately protected.

In a recent interview with VentureBeat, IBM chief AI officer Seth Dobrin noted that building trust and transparency into the entire AI data chain is crucial if the enterprise hopes to derive maximum value from its investment. Unlike traditional architectures that can merely be shut down or robbed of data when compromised by viruses and malware, the danger to AI is much greater because it can be taught to retrain itself from the data it receives from an endpoint.

"The endpoint is a REST API collecting data," Dobrin said. "We need to protect AI from poisoning. We have to make sure AI endpoints are secure and continuously monitored, not just for performance but for bias."

To do this, Dobrin said IBM is working on establishing adversarial robustness at the system level of platforms like Watson. By implementing AI models that interrogate other AI models to explain their decision-making processes, and then correct those models if they deviate from norms, the enterprise will be able to maintain security postures at the speed of today's fast-paced digital economy. But this requires a shift in thinking away from hunting and thwarting nefarious code to monitoring and managing AI's reaction to what appears to be ordinary data.

Already, reports are starting to circulate on the many ingenious ways in which data is being manipulated to fool AI into altering its code in harmful ways. Jim Dempsey, lecturer at the UC Berkeley Law School and a senior advisor to the Stanford Cyber Policy Center, says it is possible to create audio that sounds like speech to ML algorithms but not to humans. Image recognition systems and deep neural networks can be led astray with perturbations that are imperceptible to the human eye, sometimes just by shifting a single pixel. Furthermore, these attacks can be launched even if the perpetrator has no access to the model itself or the data used to train it.
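
To make the idea concrete, here is a toy sketch of a gradient-sign-style perturbation against a linear classifier: every input value shifts by an amount far too small to notice, yet the predicted label flips. The model, input, and numbers are invented for illustration; attacks on real deep networks (including single-pixel variants) are more involved.

```python
import numpy as np

# Toy illustration of an adversarial perturbation against a linear
# classifier: a tiny, uniform per-pixel nudge flips the predicted label.
# Weights and input are random stand-ins for a trained model and an image.
rng = np.random.default_rng(0)
w = rng.normal(size=784)        # toy classifier weights (a 28x28 "image")
x = rng.normal(size=784)        # toy input

def predict(v):
    return 1 if v @ w > 0 else 0

score = x @ w
# Gradient-sign idea: for a linear model the gradient of the score is w,
# so stepping each pixel by eps * -sign(w) lowers the score fastest.
eps = 1.1 * abs(score) / np.abs(w).sum()   # just enough to cross the boundary
direction = -np.sign(w) if score > 0 else np.sign(w)
x_adv = x + eps * direction

print("original prediction:   ", predict(x))
print("adversarial prediction:", predict(x_adv))
print(f"max per-pixel change:   {eps:.4f}")   # tiny relative to pixel scale
```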

To counter this, the enterprise must focus on two things. First, says Dell Technologies global CTO John Roese, it must devote more resources to preventing and responding to attacks. Most organizations are adept at detecting threats using AI-driven event information-management services or a managed-security service provider, but prevention and response are still too slow to provide adequate mitigation of a serious breach.

This leads to the second change the enterprise must implement, says Rapid7 CEO Corey Thomas: empower prevention and response with more AI. This is a tough pill to swallow for most organizations because it essentially gives AI leeway to make changes to the data environment. But Thomas says there are ways to do this that allow AI to function on the aspects of security it is most adept at handling while reserving key capabilities to human operators.

In the end, it comes down to trust. AI is the new kid in the office right now, so it shouldn't have the keys to the vault. But over time, as it proves its worth in entry-level settings, it should earn trust just like any other employee. This means rewarding it when it performs well, teaching it to do better when it fails, and always making sure it has adequate resources and the proper data to ensure that it understands the right thing to do and the right way to do it.


Read the rest here:

Can you trust AI to protect AI? - VentureBeat


How Fighting AI Bias Can Make Fintech Even More Inclusive – InformationWeek

Posted: at 5:09 am

A key selling point for emerging fintech is the potential to expand financial access to more people -- but there is a potential for biases built into the technology to do the opposite.

The rise of online lenders, digital-first de novo banks, digital currency, and decentralized finance speaks to a desire for greater flexibility and participation in the money-driven world. While it might be possible to use such resources to better serve unbanked and underbanked segments of the population, how the underlying tech is encoded and structured might cut off or impair access for certain demographics.

Sergio Suarez Jr., CEO and founder of TackleAI, says when machine learning or AI is deployed to look for patterns and there is a history of marginalizing certain people, the marginalization effectively becomes data. TackleAI is a developer of an AI platform for detecting critical information in unstructured data and documents. "If the AI is learning from historical data and historically, we've been not so fair to certain groups, that's what the AI is going to learn," he says. "Not only learn it but reinforce itself."

Fintech has the potential to improve efficiency and the democratization of economic access. Machine learning models, for example, have sped up the lending industry, shortening days and weeks down to seconds to figure out mortgages or interest rates, Suarez says. The issue, he says, is that certain demographics have historically been charged higher interest rates even if they met the same criteria as another group. Those biases will continue, Suarez says, as the AI repeats such decisions.

Essentially, the technology regurgitates the biases that people have held because that is what the data shows. For example, AI might detect names of specific ethnicities and then use that to categorize and assign unfavorable attributes to such names. This might influence credit scores or eligibility for loans and credit. "When my wife and I got married, she went from a very Polish last name to a Mexican last name," Suarez says. "Three months later, her credit score was 12 points lower." He says credit score companies have not revealed precisely how the scores were calculated, but the only material change was a new last name.

Structural factors with legacy code can also be an issue, Suarez says. For instance, code from the 1980s and early 1990s tended to treat hyphens, apostrophes, or accent marks as foreign characters, he says, which gummed up the works. That can be problematic when AI built around such code tries to deal with people or institutions that have non-English names. "If it's looking at historical data it's really neglecting years, sometimes decades' worth of information, because it will try to sanitize the data before it goes into these models," Suarez says. "Part of the sanitation process is to get rid of things that look like garbage or difficult things to recognize."
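
As a sketch of the contrast Suarez describes (names invented): a legacy-style sanitizer strips accents, apostrophes, and hyphens outright, while a Unicode-aware pass preserves the name in a canonical form.

```python
import unicodedata

# Contrast of the two approaches described above. Names are invented examples.
names = ["José Muñoz-Ríos", "O'Brien", "Zoë Smith-Jones"]

def legacy_sanitize(name: str) -> str:
    # 1980s-style cleaning: keep only plain ASCII letters and spaces,
    # treating apostrophes, hyphens, and accented characters as "garbage".
    return "".join(c for c in name if c.isascii() and (c.isalpha() or c.isspace()))

def unicode_aware(name: str) -> str:
    # Keep the name intact; normalize to canonical Unicode form (NFC)
    # so accented characters compare consistently across systems.
    return unicodedata.normalize("NFC", name)

for n in names:
    print(f"{unicode_aware(n)!r}  ->  legacy: {legacy_sanitize(n)!r}")
    # e.g. 'José Muñoz-Ríos'  ->  legacy: 'Jos MuozRos'
```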

An essential factor in dealing with possible bias in AI is to acknowledge that there are segments of the population that have been denied certain access for years, he says, and to make access truly equal. "We can't just continue to do the same things that we've been doing because we'll reinforce the same behavior that we've had for decades," Suarez says.

More often than not, he says, developers of algorithms and other elements that drive machine learning and AI do not plan in advance to ensure their code does not repeat historical biases. "Mostly you have to write patches later."

Amazon, for example, had a now-scrapped AI recruiting tool that Suarez says gave much higher preference to men in hiring because historically the company hired more men despite women applying for the same jobs. That bias was patched and resolved, he says, but other concerns remain. "These machine learning models -- no one really knows what they're doing."

That brings into question how AI in fintech might decide loan interest rates are higher or lower for individuals. "It finds its own patterns and it would take us way too much processing power to unravel why it's coming to those conclusions," Suarez says.

Institutional patterns can also disproportionately affect people with limited income, he says, with fees for low balances and overdrafts. "People who were poor end up staying poor," Suarez says. "If we have machine learning algorithms mimic what we've been doing, that will continue forward." He says machine learning models in fintech should be given rules ahead of time, such as not using an individual's race as a data point for setting loan rates.
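
A minimal sketch of such an ahead-of-time rule, with invented column names: the pipeline simply refuses to hand protected attributes to the model, though proxies such as ZIP code still require separate auditing.

```python
# Minimal sketch of an ahead-of-time rule: protected attributes are removed
# from the feature set before training or scoring, so the model never sees
# them. Column names are invented; real pipelines must also audit proxies
# (e.g., ZIP code) that correlate with protected attributes.
PROTECTED = {"race", "gender", "religion", "national_origin"}

def training_features(record: dict) -> dict:
    """Return a copy of the record with protected attributes dropped."""
    return {k: v for k, v in record.items() if k not in PROTECTED}

applicant = {"income": 58_000, "debt_ratio": 0.31,
             "race": "<protected>", "zip": "60628"}
print(training_features(applicant))   # race is gone; zip remains a proxy risk
```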

Organizations may want to be more cognizant of these issues in fintech, yet shortsighted practices in assembling developers to work on the matter can stymie such attempts. "The teams that are being put together to work on these machine learning algorithms need to be diverse," Suarez says. "If we're going to be building algorithms and machine learning models that reflect an entire population, then we should have the people building it also represent the population."


Read the rest here:

How Fighting AI Bias Can Make Fintech Even More Inclusive - InformationWeek


Narrow AI vs. General AI- What’s Next for the Future of Tech? – Analytics Insight

Posted: at 5:09 am


Artificial Narrow Intelligence (ANI), or Narrow AI, also known as Weak AI, describes artificial intelligence systems that are designed to handle a single or limited task.

Narrow AI is programmed to perform a single task like playing a particular game, analyzing data to create a report, checking the weather, etc.

Narrow AI acts similar to a computer system and displays a certain degree of intelligence in a particular field. It performs highly specialized tasks for humans, within that narrow field.

Some of the Narrow AI features that people use in their daily life include self-driving cars, facial recognition tools, chatbots, spam filters, etc.

Artificial General Intelligence (AGI), or General AI, also known as Strong AI, describes a certain mindset of AI development that aims to create intelligent machines which are indistinguishable from the human mind.

General AI is capable of performing any intellectual tasks that a human being can do. General AI acts similar to humans and copes with any generalized tasks which are asked of it.

General AI is where the technology sector is headed. To reach general AI, computer hardware needs to increase in computational power to perform more total calculations per second (cps).


About the author

Analytics Insight is an influential platform dedicated to insights, trends, and opinions from the world of data-driven technologies. It monitors developments, recognition, and achievements made by Artificial Intelligence, Big Data and Analytics companies across the globe.

Go here to see the original:

Narrow AI vs. General AI- What's Next for the Future of Tech? - Analytics Insight


What you should know about the metaverse, AI and supercomputers – World Economic Forum

Posted: at 5:09 am

Since the beginning of this year, there has been a lot of hype, skepticism, cynicism, and confusion surrounding the concept of the metaverse.

For some, it has added to the confusion of an already elusive world of augmented reality and mixed reality. But for the well-initiated, the metaverse is a landmark moment in the extended reality world; a world approaching the second life that many have long predicted.

News that some of the world's top tech firms are rapidly developing AI supercomputers has further fueled that anticipation.

But what will the entry of supercomputers mean for the metaverse and virtual reality, and how can we manage it responsibly?

Simply put, a supercomputer is a computer with a very high level of performance. That performance, which far outclasses any consumer laptop or desktop PC available on the shelves, can, among other things, be used to process vast quantities of data and draw key insights from it. These computers are massively parallel arrangements of computers or processing units which can perform the most complex computing operations.

Whenever you hear about supercomputers, you're likely to hear the term FLOPS: floating point operations per second. FLOPS is a key measure of performance for these top-end processors.

Floating point numbers, in essence, are those with decimal points, including very long ones. These decimal numbers are key when processing large quantities of data or carrying out complex operations on a computer, and this is where FLOPS comes in as a measurement. It tells us how a computer will perform when managing these complicated calculations.

The supercomputer market is expected to grow at a compound annual growth rate of about 9.5% from 2021 to 2026. Increasing adoption of cloud computing and cloud technologies will fuel this growth, as will the need for systems that can ingest larger datasets to train and operate AI.

The industry has been booming in recent years, with landmark achievements helping to build public interest, and companies all over the world are now striving to outcompete and outpace the competition on their own supercomputer projects.

In 2008, IBM's Roadrunner was the first to break the one-petaflop barrier, meaning it could process one quadrillion operations per second. According to one study, the Fugaku supercomputer, based in the RIKEN Centre for Computational Science in Kobe, Japan, is the world's fastest machine. It is capable of 442 petaflops, or 442 quadrillion operations per second.
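
As a rough illustration of what those rates mean in practice, the sketch below compares how long an idealized dense matrix multiplication would take at Roadrunner's and Fugaku's quoted figures; it assumes peak throughput and the standard 2n^3 operation count, so real-world times would be longer.

```python
# Back-of-the-envelope use of FLOPS as a yardstick, with figures from the
# article: 1 petaflop/s = 1e15 floating point operations per second.
# Assumes ideal peak throughput, which real workloads never reach.
PETAFLOP = 1e15
roadrunner = 1 * PETAFLOP            # IBM Roadrunner's 2008 milestone
fugaku = 442 * PETAFLOP              # Fugaku's rate, per the study cited above

# Multiplying two n x n matrices takes roughly 2 * n^3 floating point
# operations, a standard estimate for dense matrix multiplication.
n = 100_000
flop = 2 * n**3

print(f"matmul cost: {flop:.2e} FLOPs")
print(f"Roadrunner:  {flop / roadrunner:8.1f} s")   # ~2 seconds at peak
print(f"Fugaku:      {flop / fugaku:8.3f} s")       # ~5 milliseconds at peak
```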

In late January, Meta announced on social media that it would be developing an AI supercomputer. If Meta's prediction is true, it will one day be the world's fastest supercomputer.

Its sole purpose? Running the next generation of AI algorithms.

The first phase of its creation is already complete, and by the end of 2022 the second phase is expected to be finished. At that point, Meta's supercomputer will contain some 16,000 total GPUs, and the company has promised that it will be able to train AI systems with more than a trillion parameters on data sets as large as an exabyte, or one thousand petabytes.
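
Those units invite a quick scale check. The sketch below works through the arithmetic under one illustrative assumption, 2-byte (16-bit) weights, which is not something Meta has specified.

```python
# Scale check on the figures quoted above, under illustrative assumptions.
params = 1e12                  # "more than a trillion parameters"
bytes_per_param = 2            # assumed 16-bit (2-byte) weights; not confirmed
dataset = 1e18                 # one exabyte = 1,000 petabytes = 1e18 bytes

model_bytes = params * bytes_per_param
print(f"model weights alone: {model_bytes / 1e12:.0f} TB")           # 2 TB
print(f"dataset / model size ratio: {dataset / model_bytes:,.0f}x")  # 500,000x
```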

While these numbers are impressive, what does this mean for the future of AI?

Meta has promised a host of revolutionary uses for its supercomputer, from ultrafast gaming to instant and seamless translation of mind-bendingly large quantities of text, images and videos at once. Think about a group of people simultaneously speaking different languages and being able to communicate seamlessly. It could also be used to scan huge quantities of images or videos for harmful content, or to identify one face within a huge crowd of people.

The computer will also be key in developing next-generation AI models; it will power the metaverse, and it will be a foundation upon which future metaverse technologies can rely.

But the implications of all this power mean that there are serious ethical considerations for the use of Meta's supercomputer, and for supercomputers more generally.

The World Economic Forum's Centre for the Fourth Industrial Revolution, in partnership with the UK government, has developed guidelines for more ethical and efficient government procurement of artificial intelligence (AI) technology. Governments across Europe, Latin America and the Middle East are piloting these guidelines to improve their AI procurement processes.

Our guidelines not only serve as a handy reference tool for governments looking to adopt AI technology, but also set baseline standards for effective, responsible public procurement and deployment of AI standards that can be eventually adopted by industries.

We invite organizations that are interested in the future of AI and machine learning to get involved in this initiative. Read more about our impact.

New technologies have always demanded societal conversations about how they should be used and how they should not. Supercomputers are no different in this regard.

While AI has been brilliant at solving some large and complex problems in the world, there still remain some flaws. These flaws are not caused by the AI algorithms; instead, they are a direct result of the data that is fed into the AI systems.

If the data fed into systems has a bias, then the result of an AI calculation is bound to carry that bias and, if the metaverse and virtual reality do become a second life, then are we bound to carry with us the flaws, prejudices and biases of the first life?

The age of AI also brings with it key questions about human privacy and the privacy of our thoughts.

To address these concerns, we must seriously examine our interaction with AI. When we look at the ethical structures of AI, we must ensure its usage is transparent, explainable, bias-free, and accountable.

We must be able to explain why a certain calculation or process was initiated in the first place, what exactly happened when the AI ran it, make sure there was no initial human bias against any group or idea, and be clear about who should be held accountable for the results of a calculation.

It remains to be seen whether these supercomputers and the companies producing them will ensure that these four key areas are consistently and transparently addressed. But it will become all the more pressing as they continue to wield more power and influence over our lives both online and in the real world.

The surge in the supercomputing era will push the era of parallel computing and use cases at the speed of thought. We see a future where a combination of supercomputers and intelligent software will run on a hybrid cloud, feeding partial workflows of computation to a quantum computer, a form of computing that experts believe has the capacity to exceed even that of the fastest supercomputers.

What remains to be seen is how this era will fuel the next generation of metaverse experiences.

Written by

Arunima Sarkar, Lead, Artificial Intelligence and Machine Learning, World Economic Forum

Nikhil Malhotra, Chief Innovation Officer, Tech Mahindra

The views expressed in this article are those of the author alone and not the World Economic Forum.

Read the rest here:

What you should know about the metaverse, AI and supercomputers - World Economic Forum


Ag Expo rolls out innovations in AI, robotics, electric vehicles – The Bakersfield Californian

Posted: at 5:09 am

Technology that farmers have only dreamed about will be on full display next week at Tulare's annual World Ag Expo.

New robotics, artificial intelligence and zero-emission vehicles look to steal the three-day show, the largest of its kind, at a time when problems like water and labor scarcity are giving inventors plenty to work on.

Advancements in computer software and hardware are being brought to bear on those and other challenges, either through new diagnostics and analysis or equipment built to help move crops more quickly or more cleanly.

Mechanization hasn't evolved to the point where Kevin Andrew, senior vice president of Bakersfield-based farming company Illume Ag, can stop hiring people to do the manual work of growing and harvesting grapes. He said even top international designers he has met with "sort of glaze over" when he explains the tasks involved.

Still, he said technologies that were only talked about five, 10 years ago, such as the latest AI and mechanization, "have kind of become front and center right now."

Andrew said he's particularly encouraged that many different companies have entered the race to come up with the best machines and that existing manufacturers keep introducing upgrades.

"The more companies that come into it," he said, "we'll have a better chance of getting something out of it."

The expo, which starts Tuesday at the International Agri-Center and ends Thursday, has named a top-10 products list that trends heavily toward computer technology.

Half the items on the list are actual robots. One by Nao Technologies emits no pollution as it uses artificial intelligence and automation on large-scale vegetable crops, reducing the need for herbicides as it collects useful data.

There's a "people-scaled collaborative robot" by Burro that works alongside farmworkers with the use of GPS and sensor equipment, and an autonomous sprayer by GUSS Automation LLC that's now more compact than earlier versions.

A robot by InsightTRAC rolls through orchards targeting pests and taking down data, while a dairy automation tool by Onfarm Solutions uses a gantry system to spray cows' teats before and after milking.

Also on the top-10 list was an all-electric refrigeration truck by Hummingbird EV and an electric tractor by Solectrac. A mobile data-management tool by TJ Hoof Care made the ranking, as did software by Tule Technologies that helps with irrigation decisions. The other product on the list is a clip plug by Rain Bird for better water management.

Technology will also be a prominent topic on the expo's seminar lineup. On opening day alone there will be a talk on the prosperous use of artificial intelligence and a discussion of ransomware.

On Wednesday, a discussion is scheduled on the "bumpy road" from high-tech ideas to practical field equipment. The final day's seminars are on ag manufacturing and the future of electric vehicles, the "rise of autonomous machine functions" in agriculture and new technologies for improving food security.

Kern County grower John C. Moore III said he's one of those expo attendees who goes mainly to "attend the breakfasts, reconnect with colleagues and marvel at the new equipment," adding that he rarely, if ever, buys new equipment within six months of the event.

But he won't be surprised if some of his peers in local ag take great interest in some of the new innovations.

"Automation is No. 1 for everyone with overtime and minimum wage increases, absent any meaningful increases to most commodity prices," he said by text.

See the rest here:

Ag Expo rolls out innovations in AI, robotics, electric vehicles - The Bakersfield Californian


Singapore releases software toolkit to guide financial sector on AI ethics – ZDNet

Posted: at 5:09 am

Singapore has released a software toolkit aimed at helping financial institutions ensure they are using artificial intelligence (AI) responsibly. Five whitepapers also have been issued to guide them on assessing their deployment based on predefined principles.

The Monetary Authority of Singapore (MAS) said the documents detailed methodologies for incorporating the FEAT principles -- Fairness, Ethics, Accountability, and Transparency -- into the use of AI within the financial services sector.

The whitepapers were developed by the Veritas consortium, which is part of Singapore's national AI strategy and comprises 27 industry players that include Amazon Web Services, Bank of China, Bank of Singapore, Google Cloud, Goldman Sachs, OCBC Bank, and Unionbank of the Philippines.

According to MAS, the whitepapers provide a FEAT checklist to guide financial institutions in their AI and data analytics software development lifecycles as well as an enhanced fairness assessment methodology to define the objectives of their AI and data analytics systems and identify potential bias.

There also is a methodology for assessing ethics and accountability, which offers a framework to help financial institutions carry out quantifiable measurement of ethical practices, and another for assessing transparency so these organisations can determine how much internal and external transparency is needed to explain and interpret predictions generated by machine learning models.

The Veritas consortium also developed the software toolkit to automate the fairness metrics assessment and facilitate visualisation of the assessment interface. Available on GitHub, the open source toolkit allows for plugins to enable integration with the financial institution's IT systems.
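
The toolkit's actual interface is not documented here, so the sketch below illustrates only the general kind of fairness metric such an assessment automates: the demographic parity difference, i.e., the gap in approval rates between two groups. Function names, data, and the idea of a flagging threshold are invented for illustration.

```python
# A generic fairness metric of the kind such assessments automate:
# demographic parity difference = gap in approval rates between groups.
# Group labels, data, and any threshold are invented; the Veritas
# toolkit's actual interface may differ.
def approval_rate(decisions):
    return sum(decisions) / len(decisions)

def demographic_parity_diff(decisions_a, decisions_b):
    return abs(approval_rate(decisions_a) - approval_rate(decisions_b))

group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # 1 = loan approved, 0 = declined
group_b = [1, 0, 0, 1, 0, 0, 1, 0]

gap = demographic_parity_diff(group_a, group_b)
print(f"approval-rate gap: {gap:.2f}")   # 0.38 here; flag if above a set limit
```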

MAS' chief fintech officer Sopnendu Mohanty said in the statement released Friday: "The new open source software, assessment methodologies, and enhanced guidance will further improve the technical capabilities of financial institutions in developing responsible AI for the financial sector."

Some members of the Veritas consortium also applied the methodologies to various functions within their organisation, including customer marketing, insurance fraud detection, and credit risk scoring.

The group next would develop additional use cases and conduct pilots with selected financial institutions within the consortium to further integrate the methodologies with their existing governance framework.

MAS added that it was working with the Infocomm Media Development Authority and the Personal Data Protection Commission (PDPC) to include the toolkit in the PDPC's Trustworthy AI testing framework.

Read the rest here:

Singapore releases software toolkit to guide financial sector on AI ethics - ZDNet
