Daily Archives: January 14, 2022

The age of AI-ism – TechTalks

Posted: January 14, 2022 at 8:52 pm

By Rich Heimann

I recently read The Age of AI: And Our Human Future by Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher. The book describes itself as an essential roadmap to our present and our future. We certainly need more business-, government-, and philosophy-centric books on artificial intelligence rather than hype and fantasy. Despite high hopes, the book falls short of its promise as a roadmap.

Some of the reviews on Amazon focused on the lack of examples of artificial intelligence and the fact that the few provided, like Halicin and AlphaZero, are banal and repeatedly filled up the pages. These reviews are correct in a narrow sense. However, the book is meant to be conceptual, so the scarcity of examples is understandable. Considering that there are no actual examples of artificial intelligence, finding any is always an accomplishment.

Frivolity aside, the book is troubling because it promotes some doubtful philosophical explanations that I would like to discuss further. I know what you must be thinking. However, this review is necessary because the authors attempt to convince readers that AI puts human identity at risk.

The authors ask, "if AI thinks, or approximates thinking, who are we?" (p. 20). While this question may satisfy a spiritual need of the authors and provide them a purpose to save us, it is unfair, under the vague auspices of AI, to even raise such an existential risk.

We could leave it at that, but the authors represent important spheres of society (e.g., Silicon Valley, government, and academia); therefore, the claim demands further inspection. As we see governments worldwide dedicating more resources and authorizing more power to newly created organizations and positions, we must ask ourselves if these spheres, organizations, and leaders reflect our shared goals and values. This is a consequential inquiry, and the authors themselves arrive at the same conclusion. They declare that societies across the globe need to "reconcile technology with their values, structures, and social contracts" (p. 21) and add that while the number of individuals capable of creating AI is growing, "the ranks of those contemplating this technology's implications for humanity - social, legal, philosophical, spiritual, moral - remain dangerously thin" (p. 26).

To answer the most basic question - if AI thinks, who are we? - the book begins by explaining where we are (Chapter One: Where We Are). But "where we are" is a suspicious jumping-off point because it is not where we are, and it certainly fails to tell us where AI is. It also fails to tell us where AI was, since "where we are" is inherently ahistorical. AI did not start, nor end, in 2017 with the victory of AlphaZero over Stockfish in a chess match. Moreover, AlphaZero beating Stockfish is not evidence, let alone proof, that machines think. Such an arbitrary story creates the illusion of inevitability or conclusiveness in a field that historically has had neither.

The authors quickly turn from where we are to who we are. And who we are, according to the authors, are thinking brains. They argue that the AI age needs its own Descartes, offering the reader the philosophical work of René Descartes (p. 177). Specifically, the authors present Descartes' dictum, "I think, therefore I am," as proof that thinking is who we are. Unfortunately, this is not what Descartes meant with his silly dictum. Descartes meant to prove his existence by arguing that his thoughts were more real and his body less real. Unfortunately, things don't exist more or less. (Thomas Hobbes' famous objection asked, "Does reality admit of more and less?") The epistemological pursuit of understanding what we can know by manipulating what is was not a personality disorder in the 17th century.

It is not uncommon to invoke Descartes when discussing artificial intelligence. However, the irony is that Descartes would not have considered AI thinking at all. Descartes, who was familiar with the automata and mechanical toys of the 17th century, suggested that the bodies of animals are nothing more than complex machines. However, the "I" in Descartes' dictum treats the human mind as non-mechanical and non-computational. Descartes' dualism treats the human mind as non-computational and contradicts the claim that AI thinks, or can ever think. The double irony is that what Descartes thinks about thinking is not a property of his identity or his thinking. We will come back to this point.

To be sure, thinking is a prominent characteristic of being human. Moreover, reason is our primary means of understanding the world. The French philosopher and mathematician Marquis de Condorcet argued that reasoning and acquiring new knowledge would advance human goals. He even provided examples of science impacting food production to better support larger populations and science extending the human life span, well before they emerged. However, Descartes' argument fails to show why thinking, and not rage or love, is the faculty whose existence we can least doubt.

The authors also imply that Descartes' dictum was meant to undermine religion by "disrupting the established monopoly on information, which was largely in the hands of the church" (p. 20). While "largely" is doing much heavy lifting, the authors overlook that the Cogito argument ("I think, therefore I am") was meant to support the existence of God. Descartes thought that what is more perfect cannot arise from what is less perfect and was convinced that his thought of God was put there by someone more perfect than him.

Of course, I can think of something more perfect than me. It does not mean that thing exists. AI is filled with similarly modified ontological arguments: a solution with intelligence more perfect than human intelligence must exist because it can be thought into existence. AI is Cartesian. You can decide if that is good or bad.

If we are going to criticize religion and promote pure thinking, Descartes is the wrong man for the job. We ought to consider Friedrich Nietzsche. The father of nihilism, Nietzsche, did not equivocate. He believed that the advancement of society meant destroying God. He rejected all concepts of good and evil, even secular ones, which he saw as adaptations of Judeo-Christian ideas. Nietzsche's Beyond Good and Evil explains that secular ideas of good and evil do not reject God. According to Nietzsche, going beyond God is to go beyond good and evil. Today, Nietzsche's philosophy is ignored because it points, at least indirectly, to the oppressive totalitarian regimes of the twentieth century.

This thought isn't endorsing religion, antimaterialism, or nonsecular government. Instead, this explanation is meant to highlight that antireligious sentiment is often used to swap out religious beliefs, with their studied scripture and moral precepts, for unknown moral precepts and opaque, nonscriptural ones. It is a kind of religion, and in this case, the authors even gaslight nonbelievers, calling those that reject AI "like the Amish and the Mennonites" (p. 154). Ouch. That said, this conversation isn't merely about whether we believe or value at all, something that machines can never do or be, but about whether some beliefs are more valuable than others. The authors do not promote or reject any values aside from reasoning, which is a process, not a set of values.

None of this shows any obsolescence for philosophy; quite the opposite. In my opinion, we need philosophy. The best place to start is to embrace many of the philosophical ideas of the Enlightenment. However, the authors repeatedly kill the Enlightenment ideal despite repeated references to the Enlightenment. The Age of AI creates a story in which human potential is inert and at risk from artificial intelligence by asking "who are we?" and denying that humans are exceptional. At a minimum, we should embrace the belief that humans are unique, with the unique ability to reason, but not reduce humans to just thinking, much less transfer all uniqueness and potential to AI.

The question "if AI thinks, or approximates thinking, who are we?" begins with the false premise that artificial intelligence is solved, or that only the details need to be worked out. This belief is so widespread that it is no longer viewed as an assumption that requires skepticism. It also represents the very problem it attempts to solve by marginalizing humans at all stages of problem-solving. Examples like Halicin and AlphaZero are accomplishments in problem-solving and human ingenuity, not artificial intelligence. Humans found these problems, framed them, and solved them at the expense of other competing problems using the technology available. We don't run around claiming that microscopes can see or give credit to a microscope when there is a discovery.

The question is built upon another flawed premise: that our human identity is thinking. However, we are primarily emotional, and emotion drives our understanding and decision-making. AI will not supplant the emotional provocations unique to humans that motivate us to seek new knowledge and solve new problems to survive, connect, and reproduce. AI also lacks the emotion that decides when, how, and whether it should be deployed.

The false conclusion in all of this is that, because of AI, humanity faces an existential risk. The problem with this framing, aside from the pesky false premises, is that when a threat is framed in this way, the danger justifies any action, which may be the most significant danger of all.

My book, Doing AI, explores what AI is and is not, what others want AI to become, what you need solutions to be, and how to approach problem-solving.

About the author

Rich Heimann is Chief AI Officer at Cybraics Inc., a fully managed cybersecurity company. Founded in 2014, Cybraics operationalized many years of cybersecurity and machine learning research conducted at the Defense Advanced Research Projects Agency. Rich is also the author of Doing AI, a book that explores what AI is and is not, what others want AI to become, what you need solutions to be, and how to approach problem-solving. Find out more about his book here.


Disrupting the economics of software testing through AI – SDTimes.com

Posted: at 8:52 pm

EMA (Enterprise Management Associates) recently released a report titled "Disrupting the Economics of Software Testing Through AI." In this report, author Torsten Volk, managing research director at EMA, discusses the reasons why traditional approaches to software quality cannot scale to meet the needs of modern software delivery. He highlights five key categories of AI and six critical pain points of test automation that AI addresses.

We sat down with Torsten and talked about the report and his insights into the impact that AI is having in Software Testing:

Q: What's wrong with the current state of testing? Why do we need AI?

Organizations reliant upon traditional testing tools and techniques fail to scale to today's digital demands and are quickly falling behind their competitors. Due to increasing application complexity and time-to-market demands from the business, it's difficult for software delivery teams to keep up. There is a growing need to optimize the process with AI to help root out the mundane and repetitive tasks and rein in the costs of quality that have gotten out of control.

Q: How can AI help and with what?

There are five key capabilities where AI can help: smart crawling/Natural Language Processing (NLP)-driven test creation, self-healing, coverage detection, anomaly detection, and visual inspection. The report I wrote highlights six critical pain points where these capabilities can help, for example: false positives, test maintenance, inefficient feedback loops, rising application complexity, device sprawl, and tool chain complexity.

Leading organizations have already adopted some level of self-healing and AI-driven test creation, but by far the most impactful capability is Visual Inspection (or Visual AI), which provides complete and accurate coverage of the user experience. It is able to learn and adapt to new situations without the need to write and maintain code-based rules.

Q: Are people adopting AI?

Yes, AI adoption is on the rise for many reasons, but for me, it's not so much that people are adopting AI as that they're adopting the technical capabilities that are based on AI. For example, people want the ability to do NLP-based test automation for a specific use case. People are more interested in the ROI gained from the speed and scalability of leveraging AI in the development process, and not necessarily how the sausage is being made.

Q: How does the role of the developer / tester change with the implementation of AI?

When you look at test automation, developers and testers need to make a decision about what belongs under test automation and how it is categorized, for example. Then all you need to do is basically set the framework for the AI to operate in and provide it with feedback to continuously enhance its performance over time.

Once this happens, developers and testers are freed up to do more creative, interesting and valuable work by eliminating the toil of mundane or repetitive work - the work that isn't valuable in and of itself but has to be done correctly every time.

For example, reviewing thousands of webpage renderings. Some of them have little differences, but they don't matter. If I can have the machine filter out all of the ones that don't matter and just highlight the few that may or may not be a defect, I've now cut my work down from thousands to a very small handful.

Auto-classification is a great example of being able to reduce your work. If you're reducing repetitive work, it means you don't miss things. If I'm looking at what looks like the same page each time, I might miss something. Whereas if I can have the AI tell me this one page is slightly different than the other ones you've been looking at, and here's why, it eliminates repetitive, mundane tasks and reduces the possibility of error-prone outcomes.
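
To make the filtering idea concrete, here is a minimal sketch of visual-diff triage - not Applitools' actual Visual AI, which uses learned perceptual models, but a toy illustration of surfacing only the renderings that differ meaningfully from approved baselines. The directory names and threshold are assumptions.

```python
# Toy visual-diff triage: flag only the screenshots that differ meaningfully
# from their approved baselines, so a human reviews a handful instead of
# thousands. Real visual-AI tools replace the raw pixel threshold with
# learned models of what "matters" to a user.
from pathlib import Path

import numpy as np
from PIL import Image

BASELINE_DIR = Path("baselines")    # hypothetical folder of approved PNGs
CANDIDATE_DIR = Path("candidates")  # hypothetical folder of new renderings
THRESHOLD = 0.02                    # assumed: flag if >2% mean pixel difference

def mean_pixel_diff(a: Path, b: Path) -> float:
    """Mean absolute per-pixel difference between two images, in [0, 1]."""
    img_a = np.asarray(Image.open(a).convert("L"), dtype=np.float32)
    img_b = np.asarray(Image.open(b).convert("L"), dtype=np.float32)
    if img_a.shape != img_b.shape:
        return 1.0  # a size/layout change always deserves a human look
    return float(np.abs(img_a - img_b).mean() / 255.0)

flagged = []
for baseline in sorted(BASELINE_DIR.glob("*.png")):
    candidate = CANDIDATE_DIR / baseline.name
    if candidate.exists():
        score = mean_pixel_diff(baseline, candidate)
        if score > THRESHOLD:
            flagged.append((baseline.name, score))

print(f"{len(flagged)} pages need human review")
for name, score in flagged:
    print(f"  {name}: diff={score:.3f}")
```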

Q: Do I need to hire AI experts or develop an internal AI practice?

The short answer is no. There are lots of vendor solutions available that give you the ability to take advantage of the AI, machine learning and training data already in place.

If you want to implement AI yourself, then you actually need people with two sets of domain knowledge: first, the domain that you want for the application of AI, but second, a deep understanding of the possibilities with AI and how you can chain those capabilities together. Oftentimes, that is too expensive and too rare.

If your core deliverable is not the AI itself but the ROI that the AI can deliver, then it's much better to find a tool or service that can do it for you, and allow you to focus on your domain expertise. This will make life much easier because there will be a lot more people in a company who understand that domain and just a small handful of people who only understand AI.

Q: You talk about the Visual Inspection capability being the highest impact - how does that help?

Training deep learning models to inspect an application through the eyes of the end user is critical to removing a lot of the mundane repetitive tasks that cause humans to be inefficient.

Smart crawling, self-healing, anomaly detection, and coverage detection are each point solutions that help organizations lower their risk of blind spots while decreasing human workload. But visual inspection goes even further by aiming to understand application workflows and business requirements.

Q: Where should I start today? Can I integrate AI into my existing Test Automation practice?

Yes. Applitools Visual AI is one example.

Q: What's the future state?

Autonomous testing is the vision for the future, but we have to ask ourselves, why don't we have an autonomous car yet? It's because today, we're still chaining together models and models of models. But ultimately, where we're striving to get to is AI taking care of all of the tactical and repetitive decisions while humans think more strategically at the end of the process, where they are more valuable from a business-focused perspective.

Thanks to Torsten for spending the time with us. If you are interested in reading the full report, it is available at http://applitools.info/sdtimes.


Concrete-AI Raises $2 Million to Commercialize Data Science Platform that Reduces the Cost and Embodied Carbon Footprint of Concrete – AiThority

Posted: at 8:52 pm

Concrete-AI announced it has raised $2 million in a seed financing round with participation from the Grantham Foundation for the Protection of the Environment, a prominent family office, and other marquee investors. This financing will accelerate the rollout of Concrete-AI's pioneering data science platform, which uses artificial intelligence (AI) and machine learning (ML) to optimize supply chains and materials selection to bring new efficiencies to the design, proportioning and production of concrete mixtures. Concrete-AI's platform delivers unparalleled reductions in the cost and embodied carbon of ready mixed and precast concrete used in construction, without any changes in the method of production, the materials used or anything else.

In addition, the company announced that industry veteran Ryan Henkensiefken has joined the company as Vice President of Business Development. Henkensiefken has spent more than a decade in the concrete and chemicals industries. Most recently, he served as Market Development Manager for Master Builders Solutions (previously BASF Construction Chemicals). Prior to this, he held business development and engineering roles for Central Concrete Supply, a unit of U.S. Concrete.


During pre-commercial piloting with several of the largest cement, concrete and chemical admixtures manufacturers, including Summit Materials, U.S. Concrete (a Vulcan Materials company) and Votorantim Cimentos (Prairie Material), Concrete-AI's platform has been shown to reduce the material costs and embodied carbon footprint of ready-mixed concrete (RMC) by up to 10 percent and up to 50 percent, respectively. It achieves these reductions by applying AI/ML-enabled concrete optimization to predict the performance of concrete as a function of its mixture proportions, the characteristics of coarse and fine aggregates and supplementary cementitious materials (SCMs), and the chemical admixture type and dosage. The result is a highly optimized, cost-effective concrete that fulfills all engineering performance characteristics such as slump, set time and strength, while utilizing locally available raw materials to ensure safety, longevity and code-compliance.
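
For a feel of what predicting concrete performance from mixture proportions can look like, here is a minimal sketch of a regression model over mix-design features. It is not Concrete-AI's proprietary platform; the training data below is randomly generated as a stand-in for a producer's historical batch records, and the feature set and coefficients are assumptions.

```python
# Minimal sketch: learn compressive strength from mixture proportions, then
# score a lower-cement candidate mix against a reference design. The data is
# synthetic; real systems train on historical batch and break-test records.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2000

# Features (kg/m^3, except age in days): cement, slag, fly ash, water, age.
X = np.column_stack([
    rng.uniform(150, 450, n),   # cement
    rng.uniform(0, 200, n),     # slag
    rng.uniform(0, 150, n),     # fly ash
    rng.uniform(140, 220, n),   # water
    rng.uniform(3, 90, n),      # age
])
# Toy "ground truth": strength rises with binder content and age, falls with water.
strength = (0.09 * X[:, 0] + 0.05 * X[:, 1] + 0.04 * X[:, 2]
            - 0.12 * X[:, 3] + 8 * np.log(X[:, 4]) + rng.normal(0, 3, n))

X_train, X_test, y_train, y_test = train_test_split(X, strength, random_state=0)
model = GradientBoostingRegressor().fit(X_train, y_train)
print(f"R^2 on held-out mixes: {model.score(X_test, y_test):.3f}")

# Compare a reference design against a candidate that swaps cement for SCMs.
reference = np.array([[380, 0, 0, 180, 28]])
candidate = np.array([[330, 80, 40, 175, 28]])
print("predicted strength (reference, candidate):",
      model.predict(reference).round(1), model.predict(candidate).round(1))
```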

Concrete-AI's AI/ML approach to concrete proportioning helps solve some of the biggest challenges facing the industry: concrete overdesign; the embodied carbon footprint from cement (i.e., the glue or binder that holds the aggregates together to make concrete); increasing material costs; and shrinking margins.


Traditionally, because it has been difficult to predict how the constituents of a concrete mixture will affect its performance, concrete formulations have been overdesigned such that they contain excess cement. In the U.S. alone, this overdesign costs the industry more than $1 billion annually and results in 10 million tonnes of incremental carbon dioxide (CO2) emissions associated with cement production. If Concrete-AI were adopted globally, carbon emissions from cement and concrete production could be reduced by 500 million tonnes per year.

"Concrete-AI offers the construction sector a one-of-a-kind, capital-light, rapidly deployable, Software-as-a-Service (SaaS) solution that brings new performance and sustainability efficiencies to concrete production while leveraging existing supply chains, manufacturing processes, and the power of data," said Alex Hall, CEO of Concrete-AI. "To reduce the embodied carbon footprint of concrete construction projects, we must use materials effectively and efficiently. Concrete-AI enables this while ensuring safety, peak engineering performance and sustainability by optimizing the use of cement, aggregates, and diverse SCMs in concrete, in an unparalleled manner, through a data-driven approach. At a time when states and the federal government are increasingly requiring and incentivizing the reduction of embodied carbon in the built environment, Concrete-AI offers the industry the leading data-driven solution for ensuring cost-effective and sustainable construction."

The core Concrete-AI technology was developed at UCLA's Institute for Carbon Management (ICM) by Gaurav N. Sant and Mathieu Bauchy. Sant and Bauchy are faculty members in UCLA's Samueli School of Engineering in the Department of Civil and Environmental Engineering. Sant is also a faculty member in the Department of Materials Science and Engineering and is the Director of UCLA's Institute for Carbon Management.




Why is the NFL crowd-sourcing feckless AI solutions to its concussion problem? – The Next Web

Posted: at 8:52 pm

The NFL and Amazon Web Services (AWS) today announced the results of their second annual artificial intelligence competition. In total, five teams split the $100K prize. And I can't think of any reason why you should care.

Up front: According to the NFL, the contest is supposed to help solve its injury problems using machine learning.

Here's what the league had to say in today's press release:

The NFL reviews game footage of all major injuries, analyzing each injury frame-by-frame from every angle, recording 150 different variables. The winners' models automate that process, making review more comprehensive, accurate and 83 times faster than a person conducting the analysis manually.

Insights from the data will be used to inform the NFL's injury reduction efforts, which include driving innovation in protective equipment design, safety-based rules changes and improvements to coaching and training strategies.

Okay. So the NFL didn't challenge developers to create AI-powered sensors to detect impacts or machine-learning solutions to help guide medical professionals during on-field incident evaluations.

And it didn't use algorithms to parse injury scans in order to surface insights into the specific nature of football-related traumas.

No, what the NFL did was ask people to create algorithms that watch game tape.

Background: In what world is automating safety reviews a good idea? It's not even useful. Who cares if the AI is 83 times faster at reviewing footage than humans?

It's not like we're dealing with millions of hours of footage and struggling to find ways to isolate plays for humans to catch.

When Tesla, for example, trains AI to drive its cars, it has to do so in a simulated space because it's not feasible to have millions of agents driving millions of real-world miles. It would take too long and produce too much data for people to view.

But the NFL doesn't have that problem. Every game tape it produces is watched by millions and reviewed by thousands.

Quick take: The NFL's responded to its concussion crisis with a non-stop PR blitz. It's doing everything in its power to make it look like it's solving the problem. However, despite its best marketing efforts, it's only managed to reduce the concussion rate by about 25% in the past five years.

Furthermore, the league is worth in excess of $100 billion. If it wants to solve its injury problems, it should invest in real solutions instead of holding dumb contests that serve no purpose other than propaganda.

Nearly 15 million people watch the average NFL regular season game. We don't need help identifying plays where injuries occur. Players need help mitigating the occurrence of preventable injuries.


Growth Opportunities for Global Artificial Intelligence in the Automotive Industry – ResearchAndMarkets.com – Business Wire

Posted: at 8:52 pm

DUBLIN--(BUSINESS WIRE)--The "Growth Opportunities for Global Artificial Intelligence in Automotive" report has been added to ResearchAndMarkets.com's offering.

This research service examines the role artificial intelligence (AI) will play in the transformation of the automotive space. AI is a key disruptive technology, wherein automakers are evolving into technology firms and expanding their service offerings beyond manufacturing vehicles.

Technology implementation has increased, and the post-pandemic situation appears to be positive for all stakeholders; however, automakers have yet to fully harness AI's potential in their service offerings. Although AI is in the nascent stage of development, OEMs are adopting it across the automotive value chain to improve manufacturing and to enhance customer experience, marketing, sales, and after-sales services.

This report examines use cases and business opportunity areas for various players in the automotive ecosystem, including OEMs, Tier I suppliers and technology service providers, and new entrants or start-ups. As the industry continues to evolve, AI capabilities will become the core of automotive solutions.

The study identifies key AI trends impacting the industry, including the convergence of connectivity, autonomous, sharing/subscription, and electrification (CASE); the increasing use of digital assistants; and the emergence of cloud and data analytics. Discussion covers the adoption of various AI automotive industry elements and lists companies to watch out for in this space.

Additionally, this report guides market participants on how to chart their strategic priorities, such as partnerships, acquisitions, and new capabilities built to capitalize on growth opportunities in the automotive AI space. In conclusion, top growth opportunities are mapped out for automotive OEMs, Tier I suppliers, and technology solution providers.

Key Topics Covered:

1. Strategic Imperatives

2. Growth Dynamics - Drivers, Restraints, and Opportunities

3. Global OEM AI Roadmap - Introduction

4. Features Offered in Automotive AI

5. Application Areas for Automotive AI

6. Globally Launched New AI Features

7. Major Global Automotive AI Suppliers

8. Developments in AI-linked Products for Automotive

9. Opportunities Landscape

10. Growth Opportunity Universe

For more information about this report visit https://www.researchandmarkets.com/r/3w0rbg


How the AI Revolution Impacted Chess (2/2) – Chessbase News

Posted: at 8:52 pm

See Part 1 of the series

In 2019, Dubov introduced many new ideas in a rare variation of the Tarrasch Defense, which quickly attracted attention at the top level. Several of the world's best players have tried it, including Carlsen, who employed it successfully in the 2019 World Rapid and Blitz Championships. Dubov's double-edged opening system is based around concepts that are suggested by the newer engines, including early h-pawn advances and pawn sacrifices for the initiative.

Note that both game annotations are based on work I did for my book, The AI Revolution in Chess.

At the top level these days, everyone uses neural network (or hybrid) engines. It is much less common to see the clash of styles between a classical and a neural network engine that occurred frequently in 2019 and 2020 (such as the first game of the previous article, Grischuk - Nakamura).

Instead, we see a battle of AI-approved ideas in many games at the highest levels. This clash of preparation can rapidly drive opening theory forward. An example of how theory has advanced in a fashionable line of the Rossolimo is analysed below.


There are two final points regarding modern engines that I want to mention briefly: (1) practical use of these engines, and (2) the extent of their impact on chess.

The themes discussed in these two articles can be useful for the practical player. Besides providing creative and fresh opening ideas, modern engines can give insight into many types of positions that the classical ones struggled to play well in. Among others, strategic/closed middlegames and material imbalances have proven difficult for older engines to handle. Lastly, the originality of the newer engines' play is an interesting discussion point. Have they invented new ideas or simply reintroduced old ones?

From my work on the topic, I saw that modern engines have done both: they found new ideas and drew attention to older ideas in every popular opening system. For example, advancing the h-pawn to h6 in the Grünfeld (e.g., Paravyan - Wagner from the previous article) has been known for many years, long before Stockfish and AlphaZero. However, the point is that newer engines have a greater appreciation for such concepts, attracting the attention of top players during opening preparation and thus increasing their popularity. This process of developing ideas applies to many other opening and middlegame concepts, several of which were examined in these two articles.
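
As a concrete illustration of the practical use mentioned above, here is a minimal sketch of querying a UCI engine from the python-chess library to compare candidate continuations in a Tarrasch-style position during opening preparation. The engine path and search depth are assumptions, and any neural-network or hybrid engine build can be substituted for the binary named here.

```python
# Minimal opening-preparation sketch: ask a UCI engine for its top three
# lines in a Tarrasch Defense position. Requires python-chess and a local
# engine binary (the path below is an assumption).
import chess
import chess.engine

ENGINE_PATH = "/usr/local/bin/stockfish"  # assumed install location

board = chess.Board()
for san in ["d4", "d5", "c4", "e6", "Nc3", "c5"]:  # a Tarrasch Defense start
    board.push_san(san)

engine = chess.engine.SimpleEngine.popen_uci(ENGINE_PATH)
# multipv=3 asks for the three best lines instead of only the top one.
infos = engine.analyse(board, chess.engine.Limit(depth=22), multipv=3)
for rank, info in enumerate(infos, start=1):
    score = info["score"].white()               # evaluation from White's view
    line = board.variation_san(info["pv"][:6])  # first few moves of the line
    print(f"{rank}. {score}  {line}")
engine.quit()
```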


Predict, Optimize, Synchronize, Control: How AI Can Fulfill the Promise of Sustainable Energy Resources and Reshape the Future of Utilities – POWER…

Posted: at 8:52 pm

The world is changing rapidly as technology advances at breakneck speed. From the fourth industrial revolution and virtual reality to 5G and artificial intelligence (AI), our society is on the brink of tremendous technological upheaval. Although many industries evolve alongside innovations, some, such as utilities, have not moved at the same pace. This is in large part due to a range of complex existing barriers that most regulated industries face, including changing regulatory regimes and a lack of funding.

Despite challenges, the utilities industry is long overdue for modernization. Just the fact that, of the G20 countries, which account for 80% of the world's emissions, only six have formally increased their emissions reduction targets tells us how little we have collectively accomplished. While, from a distance, it looks like the energy industry is dragging its feet, the question of transitioning to green power is much more complicated. In terms of grid reliability and resilience, success will depend on how distributed energy resources (DERs) are integrated, optimized, synchronized, and controlled.

There is no irony lost in the fact that by its very nature, green energy relies on conditions in the environment, which are unpredictable without the right tools. This is where AI becomes a real game-changer. Industries like retail, insurance, and manufacturing have long relied on AI to increase productivity, assess risk, and improve returns on investments. For utilities, which are regulated and have a low risk tolerance, AI and machine learning (ML) can help manage and control today's dynamic, unpredictable electrical grids through a distributed AI framework applied to systems such as Energy Management Systems (EMS), Supervisory Control and Data Acquisition (SCADA) systems, Advanced Distribution Management Systems (ADMS), and Distributed Energy Resources Management Systems (DERMS). Here's how: The AI engine collects information from internal databases and external data sources such as sensors, with ML occurring both locally and at the device level. This approach is best suited for making autonomous decisions at the edge of the grid, where latency control for physical devices is critical. In an ecosystem of grid devices - inverters, capacitors, and batteries - each device can be controlled in milliseconds.

First, a digital twin model is created that mirrors the physical environment, including every asset and its location. (NASA uses digital twins to simulate and assess conditions on board a spacecraft. Digital twin models are also giving enterprises more insight into their factories and systems to increase safety and productivity, and reduce equipment downtime.) Once the data is added to the digital twin, the AI-powered digital twin will run simulations, study performance, and identify possible improvements to maximize the desired performance of the original physical asset. Different rules can be applied to meet strategic and compliance goals, and the insights gained can then be dynamically applied back to the original physical asset using AI-based asset controllers. As more assets are added, a virtual environment is created where multiple different simulations can be performed, issues can be studied, and feedback can be provided to the physical assets in real-time via control signals.
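
To make the loop described above more tangible, here is a minimal sketch of a digital-twin control cycle for a single grid asset. It is purely illustrative and not any vendor's EMS/DERMS implementation; the thermal model, constants, and setpoint ranges are all assumptions.

```python
# Toy digital-twin loop: evaluate candidate setpoints against a simple
# thermal model of an inverter, apply the best one to the (simulated)
# physical asset, and leave a hook for re-fitting the model from telemetry.
# Real twins mirror far richer physics and stream real sensor data.
import random

AMBIENT_C = 35.0        # assumed ambient temperature
MAX_SAFE_TEMP_C = 70.0  # assumed thermal limit for the inverter

def twin_predict_temp(power_kw: float, ambient_c: float) -> float:
    """Digital-twin model: steady-state temperature rises with load."""
    return ambient_c + 0.04 * power_kw

def physical_asset_temp(power_kw: float, ambient_c: float) -> float:
    """Stand-in for the real inverter's telemetry (model plus noise)."""
    return twin_predict_temp(power_kw, ambient_c) + random.uniform(-1.0, 1.0)

# Simulate candidate power setpoints in the twin and pick the highest output
# the model predicts will stay under the thermal limit.
candidates_kw = range(100, 1001, 50)
safe = [p for p in candidates_kw
        if twin_predict_temp(p, AMBIENT_C) <= MAX_SAFE_TEMP_C]
setpoint = max(safe)

measured = physical_asset_temp(setpoint, AMBIENT_C)
print(f"chosen setpoint: {setpoint} kW, measured temperature: {measured:.1f} C")

# Feedback step: if telemetry keeps disagreeing with the twin, the model's
# coefficients would be re-fit here - the "continuous learning" part.
```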

Through continuous learning, the AI models continue to refine the data in real time, while incorporating rules and focusing on asset owner goals to maximize long-term performance. This consistent flow of real-time data and information allows the models to get smarter over time and learn from previous decisions. A comprehensive AI approach can also actively synchronize and optimize, in real time, traditional and new DERs - with each other and with the power grid - which enables machine-to-machine communication and decisioning at the edge of the grid. This active synchronization capability assures that all assets under AI-based control work together to meet individual and system-wide goals.

An investor-owned utility in the southeast, for instance, is leveraging AI to transform its solar power plant into a dispatchable grid resource capable of supporting operational expansion requirements. With its AI solution, it can accurately predict its solar energy output, control the temperature of its inverters, and smooth out the solar energy utilizing battery optimization. This is helping to reduce asset maintenance costs, streamline decarbonization, and overcome renewable energy intermittency challenges.

As the AI model collects data, it delivers continuous five-minute-ahead and day-ahead forecasting of solar power output, which allows the utility to confidently lower the spinning reserves that are required to cover gaps in solar energy output. Likewise, the AI engine increases the reliability of the solar plant station by using battery storage for solar smoothing - the seamless transition between solar power, the grid, and energy devices that gives the grid and devices time to respond to the fluctuations in solar generation.

Equally important to a price-regulated industry like utilities, AI can help operators successfully compete in the energy markets. The AI engine automatically optimizes dispatch decisions around dynamic energy pricing, so that energy can be bought when it is cheapest and supplied when it is most valuable.
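
Here is a minimal sketch of that dispatch idea, assuming a battery of fixed size and an hourly day-ahead price forecast. All numbers are illustrative assumptions, not figures from the article or any vendor's system; a production optimizer would add state-of-charge tracking, efficiency losses, network constraints, and forecast uncertainty.

```python
# Toy price-aware battery dispatch: charge during the cheapest forecast hours
# and discharge during the most expensive ones, up to a simple energy limit.
# (For clarity this ignores state-of-charge ordering within the day.)
CAPACITY_MWH = 20.0
POWER_MW = 5.0  # maximum charge/discharge per hour

# Assumed day-ahead hourly price forecast ($/MWh).
prices = [32, 30, 28, 27, 26, 29, 45, 70, 85, 80, 60, 50,
          48, 46, 44, 55, 75, 110, 130, 120, 90, 65, 45, 35]

hours_needed = int(CAPACITY_MWH / POWER_MW)       # hours to fully cycle
ranked = sorted(range(len(prices)), key=lambda h: prices[h])
charge_hours = set(ranked[:hours_needed])         # cheapest hours: buy
discharge_hours = set(ranked[-hours_needed:])     # priciest hours: sell

schedule = []
for hour in range(len(prices)):
    if hour in charge_hours:
        schedule.append((hour, "charge", -POWER_MW))
    elif hour in discharge_hours:
        schedule.append((hour, "discharge", POWER_MW))

value = sum(mw * prices[hour] for hour, _, mw in schedule)
for hour, action, mw in schedule:
    print(f"hour {hour:02d}: {action:9s} {abs(mw):.1f} MW at ${prices[hour]}/MWh")
print(f"gross arbitrage value: ${value:,.0f}")
```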

This AI-driven approach can also resolve the intermittency issues of green energy. On a particularly sunny day, solar energy output will be extremely high. But when it's cloudy or at nighttime, output will drop to zero. Traditional equipment, such as inverters, cannot handle such choppy outputs often associated with green energy. They were built with steady streams of energy in mind - think coal plants. AI gives utilities insight into potential spikes to allow the power grid to make appropriate adjustments in ramping up production and preventing equipment burn-outs.

With real-time data modeling of weather and power supply/demand, as well as device control, the utility now gains a range of new operational capabilities.

The U.S. has some serious challenges ahead. Currently, California faces a 3,500-MW energy shortfall. In 2020, the state suffered rolling blackouts for the first time in nearly 20 years. The threat reemerged last summer when the California Independent System Operator declared a series of flex alerts and a stage-two emergency to reduce electrical load.

In response to this ongoing crisis, Gov. Gavin Newsom ordered the construction of temporary power generation facilities to be activated during emergencies. These four new power plants cost the state's emergency fund nearly $200 million. Despite efforts to increase supply, it won't be enough, and it is too little, too late. California's goal is to decarbonize energy by 2045, yet its natural gas plants are still needed to compensate for the inconsistent output of its solar and wind production.

On the other side of the country, the New England power grid is in an even more precarious situation. Officials from ISO-New England said that if generators fall short of fuel access this winter, rolling blackouts may be the only way to prevent a total grid crash. Clearly, the systems in place are insufficient, and building new power plants isn't the answer.

With unpredictability and unreliability the Achilles' heel of renewable energy resources, it only makes sense to utilize AI. In the past five years alone, the advancements have been nothing short of remarkable. AI, including ML capabilities, has already brought tremendous value to communications networks, especially at the application layer, where these technologies are enabling and enhancing augmented and virtual realities, connected vehicles, robotics, smart appliances, medical diagnostics, supply chain logistics, fraud detection, and the discovery of anomalies in behavior patterns. When utilities fully embrace AI, the green energy revolution will be more than a distant proposition. In fact, it has the potential to shift the relationship between consumer and supplier, where prosumers generate and sell their own power via peer-to-peer markets. That future may be closer than we think.

Sean McEvoy is senior vice president of Veritone's Energy division. He is a seasoned executive with more than 20 years of experience in the software industry. In his current role, he is responsible for business development for Veritone's Artificial Intelligence platform.


4 Reasons to Invest in Nvidia's AI in 2022 – Motley Fool

Posted: at 8:52 pm

Nvidia (NASDAQ:NVDA) has a long history of creating innovative and in-demand technology, and it's doing it again with artificial intelligence. In this Backstage Pass clip from "The AI/ML Show" recorded on Jan. 5, Motley Fool contributor Danny Vena explains why Nvidia is a top pick for investors interested in this space.

Danny Vena: When you talk about a company that's been around for a while, Nvidia actually is one of the pioneers in graphics processing. If you go back, there were other graphics processors; in their simplest form, they've been available as early as 1976. But Nvidia revolutionized the field, introducing the modern graphics processor, or GPU, in 1999. The thing that really revolutionized this area of graphics processing, or the secret sauce, was the fact that modern GPUs enabled something called parallel processing, which is the ability to conduct a multitude of complex mathematical calculations simultaneously. That, in turn, produced more life-like graphics in video games.

This is a company that has been around for a while, and it's interesting that if you go back a few years, folks were dismissing Nvidia out of hand, essentially saying, once the gaming market gets saturated, it's game over for Nvidia. That turned out not to be the case. Jensen Huang, who is the CEO of Nvidia, is a genius by all accounts and had some other ideas. First, let's talk a little bit about Nvidia and its gaming segment. Nvidia holds a commanding 83 percent share of the discrete desktop GPU market according to Jon Peddie Research. AMD lost share to Nvidia in the most recent quarter, slipping to 17 percent from 19 percent. Not everybody agrees on the exact market share. Steam's numbers are a little different. They put Nvidia and AMD at 75 percent and 15 percent of the market, with Intel commanding the remaining 10 percent share. But regardless of which numbers you use, anybody that is a serious gamer has an Nvidia GPU as their primary gaming chip. I'm sure that's something that Jose could probably back me up on, right?

Jose Najarro: Yeah. More importantly, as a creator, when I do my video editing, I tend to look for a laptop or a desktop that usually has one of the high-end Nvidia graphics cards.

Danny Vena: Absolutely. What is impressive to me is that Nvidia is now more than just games. This is where the AI part comes in: a few years ago, when researchers were trying to build the first AI models that really worked, technology had not caught up with the idea yet. We needed massive lakes of data, and we needed processors that could move that data around. When the idea of deep learning first came about in the '80s - a specific technique within AI involving a computer model that's inspired by the structure and function of the human brain - we didn't have the technology to bring it to life yet.

But in the last few years, we've overcome some of those technological hurdles, and researchers found that the parallel processing of a GPU that renders these more lifelike graphics was also perfectly suited to the unique needs of artificial intelligence. What it does is use sophisticated algorithms and millions of data points to reproduce the capacity of the human brain to learn. They develop these models, and they feed them multitudes and legions of examples so they can differentiate and discover relationships and similarities, but also distinguish differences. In its simplest form, the AI that Nvidia chips enable involves pattern recognition and making associations.

This massive computing power that's required is what the GPU provides, essentially accelerating the training of these deep learning systems. I think that was a really important moment in Nvidia's history because of the company's ability to pivot on this and essentially repurpose the humble old GPU for artificial intelligence. They didn't stop there; they also created packages, which essentially were hardware and software bundled together in such a way that they enabled companies to run basically turnkey AI operations. A company that had never done anything with AI before could come in, buy one of Nvidia's supercomputers that had all of the AI functionality, and basically start AI models on Day 1. That was really important, and it shows the forward thinking that comes from Nvidia's management.

When you think about the progression of AI, nowadays if you want to use AI, it's available in all of the world's major cloud computing providers have it. Nvidia expanded into the cloud. Really high-end versions of Nvidia GPUs are used in all of the major cloud computing operations. This includes, and you'll recognize all these names, Amazon's AWS, Microsoft Azure, Alphabet's Google Cloud, Alibaba Cloud, IBM Cloud, Baidu AI Cloud, Tencent Cloud, Oracle Cloud. They all use Nvidia GPUs for their high-performance computing needs and to power their AI systems. Now, this is important because this is an ongoing thing. These cloud computing operations, the amount of data that we're generating in the world is ridiculously high and it's doubling every few years. As that happens, they're going to need to build more data centers to handle the data. When they build more data centers, they're going to have to buy more Nvidia chips to run those data centers. You've got data centers, cloud computing, AI, all of these hot button areas of growth. Nvidia, you think back to the Gold Rush days.

This article represents the opinion of the writer, who may disagree with the official recommendation position of a Motley Fool premium advisory service. We're motley! Questioning an investing thesis -- even one of our own -- helps us all think critically about investing and make decisions that help us become smarter, happier, and richer.


MOSTLY AI raises $25 million to further commercialize synthetic data in Europe and the US – TechCrunch

Posted: at 8:52 pm

Austrian synthetic data startup MOSTLY AI today announced that it has raised a $25 million Series B round. British VC firm Molten Ventures led the operation, with participation from new investor Citi Ventures. Two existing investors also returned: Munich-based 42CAP, and Berlin-based Earlybird, which had led MOSTLY AI's $5 million Series A round in 2020.

Synthetic data is fake data, but not random: MOSTLY AI uses artificial intelligence to achieve a high degree of fidelity to its clients' databases. Its data sets look just as real as a company's original customer data with just as many details, but without the original personal data points, the company says.
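
To illustrate the general idea of data that preserves statistical structure without copying any real record, here is a minimal sketch using a Gaussian copula over a two-column table. This is only a toy; MOSTLY AI's generator is built on deep neural networks, and the columns and distributions below are invented for the example.

```python
# Toy synthetic-data generator: fit a Gaussian copula to a small numeric
# table, then sample new rows that mimic the correlations and marginals of
# the original without reproducing any original row.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Stand-in "customer" table: age and account balance for 1,000 people.
real = np.column_stack([
    rng.normal(45, 12, 1000),
    rng.lognormal(8, 1, 1000),
])

def to_normal_scores(col):
    """Empirical CDF -> standard normal scores (the copula transform)."""
    ranks = stats.rankdata(col) / (len(col) + 1)
    return stats.norm.ppf(ranks)

def from_normal_scores(z, col):
    """Map normal scores back through the column's empirical quantiles."""
    return np.quantile(col, stats.norm.cdf(z))

# 1) Transform each column, 2) learn the dependence structure.
z = np.column_stack([to_normal_scores(real[:, j]) for j in range(real.shape[1])])
corr = np.corrcoef(z, rowvar=False)

# 3) Sample latent rows with the same correlation, 4) invert the transform.
z_new = rng.multivariate_normal(np.zeros(real.shape[1]), corr, size=1000)
synthetic = np.column_stack([from_normal_scores(z_new[:, j], real[:, j])
                             for j in range(real.shape[1])])

print("real corr:     ", round(np.corrcoef(real, rowvar=False)[0, 1], 3))
print("synthetic corr:", round(np.corrcoef(synthetic, rowvar=False)[0, 1], 3))
```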

Talking to TechCrunch, MOSTLY AI CEO Tobias Hann said that the company plans to use the proceeds to push the boundaries of what its product can do, grow its team and gain more customers both in Europe and in the U.S., where it already has offices in New York City.

MOSTLY AI was founded in Vienna in 2017, and the General Data Protection Regulation (GDPR) was implemented across the EU one year later. The resulting demand for privacy-preserving solutions and the concomitant rise of machine learning have created significant momentum for synthetic data. Gartner predicts that by 2024, 60% of the data used for the development of AI and analytics projects will be synthetically generated.

MOSTLY AI's typical clients are Fortune 100 banks and insurers, as well as telcos. These three highly regulated sectors drive most of the demand for synthetic tabular data, alongside healthcare.

Unlike some of its competitors, MOSTLY AI hasn't put its focus on healthcare in the past, but it could change. "It's certainly something that we are watching closely and we are actually starting some pilot projects this year," the CEO said.

The democratization of AI means that synthetic data will eventually be used well beyond Fortune 100 companies, Hann told TechCrunch. His company therefore plans to serve smaller organizations and a wider range of sectors in the future. But until now, it made sense for MOSTLY AI to focus on enterprise-level clients.

"At the moment, enterprise companies are the ones that have the budgets, need and sophistication to work with synthetic data," Hann said. To match their expectations, MOSTLY AI obtained ISO certifications.

Talking to Hann, one thing becomes clear: While the startup has a solid technical footing, it is equally invested in the commercialization of its technology and in the business value it can add for its clients. "MOSTLY AI is leading this emerging and rapidly-growing space in terms of both customer deployments and expertise," Molten Ventures investment director Christoph Hornung said.

The need to comply with privacy laws such as the GDPR and CCPA clearly drives demand for synthetic data, but it's not the only factor at play. For instance, demand in Europe is also driven by a wider cultural context, while in the U.S., it also results from a desire to innovate. Use cases can include advanced analytics, predictive algorithms, fraud detection and pricing models, but without data that can be traced back to specific users.

"Many companies are proactively approaching the space because they understand that customers value privacy," Hann said. These companies understand that they can also gain a competitive advantage when dealing and working with data in a privacy-preserving way.

Seeing more U.S. companies wanting to adopt synthetic data in innovative ways is the key reason MOSTLY AI wants to grow its team in the U.S. But it is also recruiting more generally, both in Vienna and remotely. Its plan is to increase its headcount from 35 to 65 people by the end of the year.

Hann expects 2022 to be the year when synthetic data will take off and, beyond this year, "a really strong decade for synthetic data." This will be supported by growing demand for responsible AI, articulated around key concepts such as AI fairness and explainability. Synthetic data helps answer these challenges. "It enables enterprises to augment and de-bias their data sets," Hann said.

Machine learning aside, MOSTLY AI sees lots of potential for synthetic data to be leveraged in software testing. Supporting these use cases requires making synthetic data accessible not only to data scientists, but also to software engineers and quality testers. It's with them in mind that MOSTLY AI came up a few months ago with version 2.0 of its platform. "MOSTLY AI 2.0 can be implemented on premise or in a private cloud, and adapts to different data structures of the company using it," the company wrote at the time.

"We are clearly a B2B software infrastructure company," Hann said. Both in its Series A and B rounds, the company looked for investors who understood that approach.

Molten Ventures being a publicly listed VC - and consequently not subject to typical funding cycles - also carried some weight, Hann confirmed when I asked. "Having this long-term commitment from a partner is something that was very appealing to us, because it's a little more flexible."

It doesn't hurt either that Citi Ventures is the venture arm of Citigroup, and that it is headquartered in the U.S. "We're significantly increasing the team in the U.S., and it's always great to also have a U.S.-based investor that can help with network and relationships there," Hann said.

With $25 million in new funding and an increased U.S. presence, MOSTLY AI will now have more resources to compete against other companies in its segment of the synthetic data space. These include Tonic.ai, which raised a $35 million Series B last September; Gretel AI, which disclosed a $50 million Series B round last October; and seed-funded British startup Hazy, as well as players that focus on specific verticals.

"We do see more and more players emerging in the space and in the market in general, so it certainly shows that there's a lot of interest there," Hann said.


2022 Top Predictions for AI in Finance IoT World Today – IoT World Today

Posted: at 8:52 pm

What's likely to occur in artificial intelligence in the world of finance in 2022? Here's what leading academics, analysts and AI experts had to say:

Lukasz Szpruch, associate professor at the University of Edinburgh's School of Mathematics and program director of the Finance and Economics Programme at The Alan Turing Institute

Whatever you are trying to do using data-driven tools, the data is at the core of it. We've learned that data is not perfect and that biases exist. The challenge with cases like fraud detection is that each financial institution is only seeing the world through the lens of its own data sets. So much more could be done if we were able to bring in data from across other institutions and be able to track those malicious actors. This is an old idea that's being reheated because we can now do it better. The same idea is being used under the name of market generators in more quantitative finance, where we are now beginning to be able to automate the pricing of derivatives.

Alexander Harrowell, senior AI and IoT analyst at Omdia

It's still to be seen whether quantum computing will be genuinely useful and how quickly. Financial services customers have been some of the first to trial the technology in production, helped by a strong fit between problem sets such as portfolio optimization and the kind of binary optimizations that current quantum systems do well. Over the next two years, Omdia's Quantum Computing: State of the Market Survey suggests we can expect a five-times increase in projects in production, a 7.5x increase in pilot projects and the first scale-up projects. Many of these will be financial. This also means that financial users will be the first to encounter the problems. Half of our respondents said their biggest barrier to adoption was no understanding of what the technology can do. Over the next two years, they will be on point in finding out.

Kai Yang, chief data officer, APA at HSBC

We expect to see ethical AI frameworks becoming a more common feature of responsible corporate governance, as regulators take a stronger and more active stance on the fairness of banking processes and models. Customers, too, are demanding greater levels of transparency around how their data is used; hence, an ethical AI culture will need to become an integral part of corporate identity. At HSBC we aim to take a leadership role, as one of the first financial service companies to create AI and data ethics principles, recently partnering with the Monetary Authority of Singapore and the Alan Turing Institute to help develop a framework for responsible adoption of AI in the financial services industry.

Manuela Veloso, head of AI research, JPMorgan Chase

Through language and image processing and machine learning, AI will enable, at large scale, the search and understanding of the ever-growing body of available digital data. AI will help with data standardization, pattern detection, safe data sharing, prediction and anticipation. As we face increasingly complex decision making involving many participants and many objectives, we will rely on AI assistants to tediously analyze, simulate and evaluate large numbers of alternative solutions. Humans and AI will increasingly interact in a seamless integration of their capabilities in a continuous learning experience. AI systems will include explanations and actively request data and feedback to improve their assistance over time, with the goal to capture underlying human values and rules. Overall, we will continue to experience AI enabling human dreams to improve life in all sorts of ways, including health, finance, climate, energy, education, equality and social condition.

Felix Hoddinott, chief analytics officer, Quantexa

Historically, the complexity of deploying AI models for regulatory purposes has blocked AI initiatives within many financial institutions. But regulators are increasingly seeing evidence of the impactful improvements achievable from using AI applied to the wider data describing the full context around decisions. Regulators will now issue guidance to accelerate this use of AI, especially in areas like risk assessment and monitoring. This will not reduce the requirements for justifiable and fair models but clarified guidelines will be more clearly open to addressing these requirements through emerging technologies and methods. Establishing modern governance processes to simplify deployment in a regulatory space will reduce risk and improve customer experience.

Farouk Ferchichi, chief data analytics officer, Envestnet

In 2022, AI will be harnessed in finance to create a hyper-personalized and unified customer experience, to reduce costs, and to target offers and cross-sell products. Given ongoing regulatory pressure, financial companies will utilize AI to improve and automate the monitoring of data quality, especially for product data that is used for regulatory reporting. In addition, the scope of model governance will continue to expand, with financial institutions having to rely on a combination of synthetic data to test models, as well as alternate data as a backup. AI can enable financial firms to segment product offers by market audience, and distribute them as part of an integrated, hyper-personalized omnichannel experience for customers.

Helen Sutton, SVP EMEA and APAC sales, Dataminr

We're seeing growth in financial services institutions (FSIs) looking to implement more thoughtful digitalization. With that comes increased risk. In fact, over 700 organizations experienced a ransomware attack in Q2 of 2021, and the average ransomware pay-out has almost tripled compared with last year, with organizations paying $850,000 on average. I foresee that almost all FSIs will review in 2022, if not by Q4 of 2021, whether to create a shared threat intelligence organization spanning cyber and physical threats. More than ever, banks and insurers need to de-silo their critical information structures to ensure effective support and security when adopting new technologies and platforms. We'll see further investment in the applications of AI that support this parallel trend.

Mike de Vere, CEO, Zest AI

We predict that AI will continue to push its way into more critical functions within the financial services industry. For example, we're seeing AI-driven credit underwriting become more popular in unexpected places such as smaller regional lenders and credit unions. There's an enormous data arbitrage to be gained by replacing legacy FICO scoring with AI-based models. We're talking upwards of 30% to 50% statistical improvement for hard-to-score consumers, which translates into hundreds of millions of dollars in profit for lenders. As more AI is integrated into businesses, we'll see fears around its use subside. Employees will become more comfortable with the technology and realize the potential it has to improve the overall quality of work their teams produce. The fears about the technology replacing employees will ultimately shift into an appreciation for the technology's capabilities.

Kenneth Chan, managing director and co-founder, ViewTrade Holding Corp.

It is no secret that AI has played a major role in the ongoing democratization of investing. My prediction for next year and beyond is that the major growth we've seen in retail investing will continue at a rapid pace, and AI will continue to fuel that growth. AI has helped to level the playing field for investors. Today you don't have to be a high-net-worth (HNW) investor to get personalized financial advice; there is a chatbot for that. These AI-driven chatbots will only continue to get smarter. Machine learning can now sift through various financial accounts and profiles for a user and provide a snapshot of recommended to-dos on a dashboard. This will continue to gain traction in the decade ahead. AI has also helped to simplify the client onboarding process, while also enhancing the customer experience. Going forward, as the retail investing trend continues to grow, expect AI to play a larger role in risk assessment, risk management, and fraud detection. This will enable businesses to scale and keep up with heavy volatility.
