The Prometheus League
Breaking News and Updates
Daily Archives: January 24, 2022
Why User Education Is Necessary To Avoid AI Failure – Forbes
Posted: January 24, 2022 at 10:35 am
The more a technology or concept permeates and gets normalized in our day-to-day lives, the more we grow to expect from it. About two decades ago, a sub-56Kbps dial-up internet connection seemed miraculous. Today, with internet speeds as high as 2,000Mbps becoming normal, a 56Kbps connection would be considered a failure of sorts, at least in the developed world. This shift in expectation also applies to AI. Having seen numerous practical AI applications aid human convenience and progress, both the general population and the AI research community now expect every new breakthrough in the field to be more earth-shattering than the previous one. Similarly, what qualifies as AI failure has also seen a massive shift in recent years, especially from a problem owner's perspective.
Just the fact that an AI model performs a specific function with expected levels of efficacy is no longer the only requisite for its applications to be considered successful. These systems must also provide significant real-world gains in the form of time saved or revenue earned. For instance, a smart parking system that can predict parking availability with 99.7% accuracy, although undoubtedly efficacious, cannot be considered successful if its real-world adoption does not lead to tangible gains. Even with such a system installed, parking lot managers or smart city administrators may not be able to make optimal use of their parking spaces for a number of reasons. These could vary from simple causes, like parking lot operators not being able to use the software interface optimally, to complicated ones, like patrons and drivers struggling or hesitating to adapt to the new system. For these reasons and many others, only a fraction of AI projects are ever successful. Estimates for the percentage of AI projects that fail to deliver real value range from 85% to 90%.
In most of these cases, the lack of tangible results achieved by AI systems has much less to do with the technological aspect and much more to do with the human aspect of these systems. The success or failure of these projects depends on how people interact with the technologies to achieve the intended objectives.
As researchers continue to add to the body of AI research, the effectiveness of AI and AI-driven systems is constantly increasing. However, as powerful as it may be, any AI-driven tool is just that: a tool. The success or failure of AI initiatives, more often than not, is determined by how the users, both primary and secondary, perceive, receive and operate these AI systems.
Business leaders, such as owners, directors and C-suite executives, often end up being only secondary users of AI, or of any other technological application for that matter. However, they are among the biggest beneficiaries as well as the biggest enablers of such initiatives. After all, it is often their will and wherewithal that matter most when driving AI initiatives. So, the most common reasons for AI initiatives not delivering real value often involve a lack of buy-in from business leaders. Buy-in does not necessarily mean just a willingness to dispense funds for AI initiatives. An increasing number of businesses are investing in AI initiatives anyway, which means that AI failure does not necessarily stem from an absence of investment.
Today, buy-in is represented by a total conviction in a technology's or investment's ability to make an impact. This conviction results in a commitment to making these technological endeavors successful through means that involve more than just the technology itself. For instance, a business truly committed to the success of its AI initiatives will also invest in the non-core aspects of those initiatives, such as safety and privacy, among others. Ultimately, it is this commitment that ensures they take all the steps necessary for AI success.
More often than not, AI-based applications do not entirely automate manual processes. They only automate the most analysis-intensive tasks. This means that human operators are necessary to leverage and augment the data processing capabilities of AI. This makes the role of human users extremely important for these AI applications.
Even the best AI-enabled business intelligence tools will prove useless if the executives using them aren't trained to navigate the dashboards or to understand the data. This problem becomes even more pronounced where AI tools are involved at an operations level, such as computer vision-based handheld vehicle inspection tools or a mobile parking app that users can use to find and book parking spots. When users are not trained well enough to navigate and use technological interfaces, the applications may not deliver the expected outcomes. Although a well-designed user experience (UX) can go a long way in these circumstances, it is equally crucial for users to be educated about these applications.
Before practical training on how to use new AI applications, users should be given awareness training on how the new technology will add value to their work. More importantly, they should be convinced that the objective of the technology is not to replace them but to augment their efforts. That's because the fear of obsolescence is among the biggest underlying reasons for low user adoption.
Be it consciously or subconsciously, many workers, most of whom are potential AI users, fear becoming obsolete as AI becomes more commonplace. This perception of threat often manifests itself as an unwillingness to adopt the technology. The lack of enthusiasm then leads to a lack of involvement in training, which ultimately hampers the results of AI initiatives.
AI initiatives will only become successful and deliver significant ROI when all the users, from top executives to blue-collar workers, are educated not only about the technology but also about their roles in making it successful.
Most AI applications are bespoke solutions to problems that are specific to the companies and customers using them. This means there isn't a fixed playbook on coexisting with and using AI tools. Hence, it is unreasonable to expect the users of AI solutions to educate themselves on their organization's AI initiatives. Businesses, along with their AI implementation partners, must come together to create case-specific user education strategies for the entire lifecycle of the AI solutions. By creating and executing these user education strategies, businesses can ensure that their people facilitate AI initiatives in more ways than one.
Why User Education Is Necessary To Avoid AI Failure
Before an AI project even starts, it is imperative to ensure that the top leadership of the organization is on board with the project. And that is exactly what top-level user education achieves. When business leaders and leading investors are aware of what results to expect from proposed AI initiatives, they are more comfortable investing in them. However, it is equally important to establish expectations in terms of the input and support that will be required from the leadership to make an AI initiative a success. Creating awareness regarding the potential outcomes and expected support will ensure that AI projects have the structural support to be sustainable and successful. Making top-level decision makers aware of challenges will also minimize the chances of them withdrawing support when projects run into obstacles.
In addition to making the secondary users aware of the potential benefits of AI initiatives, it is crucial to make sure that the primary users are not just accepting of but enthusiastic about the adoption of AI. At the end of the day, if the end users do not use the technology the way it is supposed to be used, the technology will never be able to deliver on expectations. So, part of what is expected from top-level leadership should be convincing lower-level managers and employees of the value of proposed AI initiatives. The leaders can do this by first establishing, through open and clear communication, that the AI applications will not replace the human workforce but will augment it. Another way the leadership can accelerate user adoption is by providing adequate reskilling opportunities to employees so that they can become better operators of AI tools. Moreover, translating the broader advantages of the new AI solutions into individual benefits for workers in different roles will ensure that workers welcome the infusion of AI into daily operations.
Practical training on how to use the technology should constitute the final leg of the user education strategy. Once the leadership and the end users are motivated enough to use the new AI solutions, they will be more receptive to instructions for use. As a result, they will be better able to contribute to AI initiatives and participate in their success.
This process of user education should not be viewed as a linear, one-time activity aimed at mitigating AI failure. It should be considered a cycle that begins with the discovery of new applications and ends with these applications becoming an integral, value-adding part of regular business operations. Businesses aiming to implement AI in the near future can get started right now by educating their people on why they shouldn't view the AI-driven future with fear but with hope.
Read the original:
Why User Education Is Necessary To Avoid AI Failure - Forbes
BrainChip Reflects on a Successful 2021, with Move to Market Readiness Behind Next-Generation Edge-Based AI Solutions – Business Wire
Posted: at 10:35 am
LAGUNA HILLS, Calif.--(BUSINESS WIRE)--BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY) is a leading provider of ultra-low power, high performance artificial intelligence technology and the world's first commercial producer of neuromorphic AI chips and IP. BrainChip is looking forward to 2022 as it closes its most successful year ever, buoyed by technological advancements made to its Akida technology, commercialization, additions of veteran leadership to both its management and Board of Directors, market exchange upgrades and more.
BrainChip saw its vision for brain-inspired Neuromorphic AI architecture move into production reality this year. This technology, which mimics the processing function and learning structure of the human brain, helps customers create ultra-low power products with the ability to perform classification entirely on-chip and to rapidly learn on-chip without the need to connect to the cloud.
Among the many milestones achieved this year, the Akida AKD1000 neuromorphic processor production chips were received from BrainChip's manufacturing partner SocioNext America and TSMC. BrainChip completed functionality and performance testing of the production chips and began volume production. This success enabled the company to start accepting and shipping orders of Akida development kits to its partners, large enterprises and OEMs for their own internal testing, validation, and product development. Additionally, BrainChip licensed its Akida IP to ASIC industry heavyweights MegaChips and Renesas to help enhance and grow their technology positioning for next-generation, cloud-independent AI products.
This year also saw the introduction of MetaTF, a versatile ML framework that works within TensorFlow[1] and allows people working in the convolutional neural network space to transition to neuromorphic computing quickly and easily without having to learn anything new. The MetaTF development environment is an easy-to-use machine learning framework for the creation, training and testing of neural networks, supporting the development of systems for Edge AI on BrainChip's Akida AKD1000 event domain neural processor. Over 4,500 potential customers started using MetaTF in 2021 alone.
Achieving rapid success in Akida product development allowed BrainChip to move its US headquarters to larger facilities to support expected customer growth as the company continues to move toward commercialization of its Akida AKD1000 neuromorphic processor and comprehensive development environment. The new 10,000 sq. ft. (929 square meters) facility is four times the size of its previous headquarters and provides the company with the ability to scale its services and processes needed to satisfy expected customer and support infrastructure needs.
BrainChip recently announced the appointment of Mr. Sean Hehir as BrainChip's new CEO. He takes over from interim CEO Peter van der Made, who returns to his previous CTO position full time. Mr. Hehir's focus is to guide BrainChip's progress towards full commercialization of the Akida AKD1000 chip and its IP. He is joined by non-executive director additions to the Company's Board: former ARM executive Antonio J. Viana and innovation champion and strategic advisor Ms. Pia Turcinov.
As a public company, BrainChip received several upgrades to its market presence, which included its addition to the S&P/ASX 300 index, an update of its ticker symbol on the OTC market and a listing upgrade to the OTCQX Best market. BrainChip also launched a US-based ADR (BCHPY) that allows BrainChip to continue its path of pursuing high accessibility to the US capital markets. The strong demand for high-growth-potential artificial intelligence stocks in the US is expected to result in an influx of US investment and ultimately an increase in shareholder value. The company's successes have driven a rise in its stock price of 100 percent over the last 12 months.
Other highlights from this year include being named among the EE Times Silicon 100; the launch of five sensor modalities for Akida (odor, vision, audio, tactile and gustation), which see real-world application in Covid-19 detection, facial recognition, voice recognition, vibration analysis, and wine and beer taste recognition, with practical demonstrations in quality control and other industrial environments consuming only microwatts to milliwatts of power; BrainChip's founder Peter van der Made winning the AI Hardware 2021 innovator award based on the development and production capabilities of Akida; the continuation of its highly popular This is Our Mission podcasts; and frequent speaking appearances at shows and events throughout the world.
"One of the things that impressed me the most about BrainChip when looking to join the company was the quality and volume of successes it has achieved, and the dedicated and talented team that is the heart of BrainChip's success," said Mr. Hehir. "As the company transfers from strength of vision to strength of production, the possibilities are endless. BrainChip is moving to market readiness, expansion of a product portfolio, improvements to human resources, and improvements in the stock market. I'm excited to see Akida's impact on the $46B USD Edge AI total addressable market (TAM) as BrainChip has unequivocally proved that it is the leader in the neuromorphic AI space. I look forward to seeing the kind of revolutionary moves we make in 2022."
About BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY)
BrainChip is a global technology company that is producing a groundbreaking neuromorphic processor that brings artificial intelligence to the edge in a way that is beyond the capabilities of other products. The chip is high performance, small, ultra-low power and enables a wide array of edge capabilities that include on-chip training, learning and inference. The event-based neural network processor is inspired by the spiking nature of the human brain and is implemented in an industry standard digital process. By mimicking brain processing BrainChip has pioneered a processing architecture, called Akida, which is both scalable and flexible to address the requirements in edge devices. At the edge, sensor inputs are analyzed at the point of acquisition rather than through transmission via the cloud to a data center. Akida is designed to provide a complete ultra-low power and fast AI Edge Network for vision, audio, olfactory and smart transducer applications. The reduction in system latency provides faster response and a more power efficient system that can reduce the large carbon footprint of data centers.
Additional information is available at https://www.brainchipinc.com
Follow BrainChip on Twitter: https://www.twitter.com/BrainChip_inc
Follow BrainChip on LinkedIn: https://www.linkedin.com/company/7792006
[1] TensorFlow is a registered trademark of Google LLC.
Read the original here:
What is the relationship between AI and 5G? – Ericsson
Posted: at 10:35 am
The impact of 5G
The commercial roll-out of 5G is now under way. But simply put, 5G is not just another G. It's a complete ecosystem change in the way networks are run and managed, including how applications run on the network.
There are three main use case groups in 5G:
Other, emerging use case groups include massive machine type communication, or MTC.
This is where the connectivity and density of 5G really comes into play.
MTC enables the connectivity of a huge number of devices; millions, even billions, of devices, all of which are connected. Although they're more likely to send very low data rates, the sheer number of devices and their long battery life mean they can open the doors to brand new industrial use cases. For example, monitoring, farming, agriculture, transportation, automotive, smart cities, and healthcare could all be transformed thanks to MTC. It's all about connecting human expertise to a huge number of connected sensors for faster, more efficient insights.
Another emerging technology is ultra-reliable, low latency communications, or URLLC. This is where 5G shines. Use cases with URLLC can deliver very low latencies, down to one millisecond, which is a perfect solution for mission-critical use cases from vehicle-to-vehicle communication to remote diagnostics or remote surgery.
URLLC's low latency is perfect for mission-critical use cases such as those in healthcare.
When it comes to 5G networks, AI is no longer a nice-to-have but a must-have component to tackle the tremendous complexity that comes with 5G. AI, along with the data and automation capabilities that come with it, supports the diverse ecosystem of evolving networks in a way that humans alone are unable to manage.
The expectations of 5G are high due to its potential to transform industries. Service providers expect the high performance, low latency, throughput and availability that 5G promises. As a result, the ability to operate 5G networks will need to speed up; in fact, the development of high-level operational capabilities like zero-touch and self-healing networks is already in the works to meet this growing demand.
The evolution of networks involves some tough challenges, the first of which is data; particularly, how to make network operations data-centric and data-driven. For example, the data elements within a 5G network are highly distributed. Data comes in all shapes, sizes, and volumes. So how is it possible to manage this data efficiently? After all, data is what drives capabilities like machine learning and advanced analytics. Without it, we can't run future networks.
First, a clearly defined and executed data-driven strategy is crucial for service providers; one that drives how data should be managed across operations end-to-end, from ingestion all the way to final decision making.
Second, clear decisions need to be made around where and how data is processed, so that AI logic can make timely decisions. For example, data could be transferred to a centralized cloud location to be processed for AI inference, but that may incur high transfer costs and additional delays, especially for real-time use cases where decisions must be made in a split second. Instead, AI inference could be moved closer to the data source, creating a shorter and leaner pipeline.
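To make this trade-off concrete, here is a minimal Python sketch (my illustration, not from the Ericsson article) comparing the end-to-end decision latency of central-cloud versus edge inference; all payload sizes, link speeds, propagation delays, and inference times are hypothetical placeholder values.

```python
# Illustrative latency comparison: central-cloud vs. edge AI inference.
# All numbers below are hypothetical placeholders, not Ericsson figures.

def decision_latency_ms(payload_kb: float, bandwidth_mbps: float,
                        one_way_propagation_ms: float, inference_ms: float) -> float:
    """Round-trip latency: send the payload, run inference, return the decision."""
    transmission_ms = (payload_kb * 8) / (bandwidth_mbps * 1000) * 1000  # kilobits over kbps, in ms
    return 2 * one_way_propagation_ms + transmission_ms + inference_ms

# A 200 KB sensor frame, with a decision needed within a few milliseconds.
central = decision_latency_ms(payload_kb=200, bandwidth_mbps=100,
                              one_way_propagation_ms=25, inference_ms=5)
edge = decision_latency_ms(payload_kb=200, bandwidth_mbps=1000,
                           one_way_propagation_ms=1, inference_ms=8)

print(f"central cloud: {central:.1f} ms, edge site: {edge:.1f} ms")
# Even with a slightly slower model at the edge, the shorter path usually wins
# for split-second decisions, which is the point made in the paragraph above.
```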
Another important aspect is to ensure data quality and lineage, end-to-end, so decisions can be made based on trustworthy and high-quality data input. It makes no sense to rely on an AI logic if the data is corrupt.
And finally, organizational transitions regarding competence, technology development, and future-proofing employees skills are all additional challenges that can occur with 5G and AI adoption.
To overcome these new challenges, Ericsson changed its approach from being reactive to being much more proactive and predictive, which is the baseline of our AI modeling. The result is a model called the Ericsson Operations Engine. In parallel with our data-driven approach, we're also upskilling our people, who can then see the network from an end-to-end perspective.
We also focus on data analytics, competency development, and 5G technologies, along with developing specific use case experts to support new industry requirements. We need to have the competence to understand the whole ecosystem of these emerging use cases: not just an understanding of the technology, but also of the various platforms and tools that help run network operations in a smoother and more automated way.
As service providers begin to offer 5G services to enterprises, the efficient use of the network will be crucial for them to keep costs down. This is where network slicing comes in.
Network slicing is a unique technology in 5G, where the network can be logically sliced end-to-end to deliver customizable service performance. Service providers will be able to slice the network into segments, offer different segments of the network to different enterprise customers, and make sure each receives the performance level it is paying for.
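As a rough illustration of the concept (not Ericsson's implementation), a slice can be modeled as a named set of service-level targets that an operations system continuously checks measurements against. The slice names and figures below are hypothetical.

```python
# Hypothetical network-slice definitions: each slice is a set of SLA targets
# that a different enterprise customer pays for.
SLICES = {
    "factory-urllc": {"max_latency_ms": 1.0, "min_reliability": 0.99999, "min_throughput_mbps": 10},
    "video-embb":    {"max_latency_ms": 20.0, "min_reliability": 0.999, "min_throughput_mbps": 500},
    "sensors-mtc":   {"max_latency_ms": 500.0, "min_reliability": 0.99, "min_throughput_mbps": 0.1},
}

def sla_violations(slice_name: str, measured: dict) -> list:
    """Compare live measurements against the slice's targets."""
    target = SLICES[slice_name]
    problems = []
    if measured["latency_ms"] > target["max_latency_ms"]:
        problems.append("latency above target")
    if measured["reliability"] < target["min_reliability"]:
        problems.append("reliability below target")
    if measured["throughput_mbps"] < target["min_throughput_mbps"]:
        problems.append("throughput below target")
    return problems

print(sla_violations("factory-urllc",
                     {"latency_ms": 1.4, "reliability": 0.99999, "throughput_mbps": 12}))
# ['latency above target'] is the kind of deviation an AI-driven operations
# system would be expected to detect and remediate automatically.
```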
Of course, slicing comes with its own challenges, including its technical complexity and the fact that it's cross domain. Consequently, we've been working on several advanced AI techniques to help customers prepare for the challenge of operating such complex systems. The future of networks ultimately lies in cognitive systems, where networks can apply a combination of machine learning and machine reasoning, which is built from knowledge base and reasoning engines to generate conclusions.
This technique is not only relevant to network slicing, but any complex network operations, because we often don't have enough data, or enough labels to train all the possible scenarios. This approach, however, enables the machine to learn on its own and make critical decisions itself without previously being trained or told to do so.
We believe this entire approach, along with intent-based operations, will be a critical step in making 5G operations as autonomous as possible. Exciting times are just around the corner.
This is how network operations can make 5G systems resilient.
Read all about driving 5G monetization through intent-based network operations.
Read more about Managed Services.
The rest is here:
What you need to know about China’s AI ethics rules – TechBeacon
Posted: at 10:35 am
Late last year, China's Ministry of Science and Technology issued guidelines on artificial intelligence ethics. The rules stress user rights and data control while aligning with Beijing's goal of reining in big tech. China is now trailblazing the regulation of AI technologies, and the rest of the world needs to pay attention to what it's doing and why.
The European Union had issued a preliminary draft of AI-related rules in April 2021, but we've seen nothing final. In the United States, the notion of ethical AI has gotten some traction, but there aren't any overarching regulations or universally accepted best practices.
China's broad guidelines, which are currently in effect, are part of the country's goal of becoming the global AI leader by 2030; they also align with the long-term social transformations that AI will bring and aim to fill the role of proactively managing and guiding those changes.
The National New Generation Artificial Intelligence Governance Professional Committee is responsible for interpreting the regulations and will guide their implementation.
Here are the most important parts of the directives.
Titled "New Generation Artificial Intelligence Ethics Specifications,"the guidelines list six core principles to ensure "controllable and trustworthy" AI systems and, at the same time, illustrate the extentof theChinese government's interest in creatinga socialist-driven and ethically focused society.
Here are the key portions of the specs, which can be useful in understanding the future direction of China's AI.
The aim is to integrate ethics and morals into the entire lifecycle of AI; promote fairness, justice, harmony, and safety; and avoid issues such as prejudice, discrimination, and privacy/information leakage. The specification applies to natural persons, legal persons, and other related institutions engaged in activities connected to AI management, research and development, supply, and use.
According to the specs, the various activities of AI "shall adhere to the following basic ethical norms":
These ethical rules should be followed in AI-specific activities around the management, research and development, supply, and use of AI.
The regulation specifies these goals when managing AI-related projects:
Under the rules, companies will integrate AI ethics into all aspects of their technology-related research and development. Companies are to "consciously" engage in self-censorship, strengthen self-management, and refrain from engaging in any AI-related R&D that violates ethics and morality.
Another goal relates to improved quality for data processing, collection, and storage, and to enhanced security and transparency in terms of algorithm design, implementation, and application.
The guidelines also require companies to strengthen quality control by monitoring and evaluating AI products and systems. Related to this are the requirements to formulate emergency mechanisms and compensation plans or measures, to monitor AI systems in a timely manner, and to process user feedback and respond, also in a timely manner.
In fact, the ideas of proactive feedback and usability improvement are key. Companies must provide proactive feedback to relevant stakeholders and help solve problems such as security vulnerabilities and policy and regulation vacuums.
Keeping AI "under meaningful human control"in the Chinese AI ethics policy will no doubt draw comparisons to Isaac Asimov's Three Laws of Robotics.The bigger question is whether China, the United States,and the European Union can find commonality on AI ethics.
Without question, the application of AI is increasing. In my opinion, the United States still holds the lead, with China closing the gap and the EU falling behind. This increasing use is driving many toward the idea of developing an international, perhaps even global, governance framework for AI.
When you compare the principles outlined by China and those of the European High-level Expert Group on AI, many aspects align. But the modus operandi is very different.
Let's consider the concept of privacy. The European approach to privacy, as illustrated by the General Data Protection Regulation (GDPR), protects an individual's data from commercial and state entities. In China, personal data is also protected, but, in alignment with the Confucian virtue of filial piety, only from commercial entities, not from the state. It is generally accepted by the Chinese people that the state has access to their data.
This issue alone may be enough to prevent a worldwide AI ethics framework from ever fully developing. But it will be interesting to watch how this idea evolves.
The rest is here:
What you need to know about China's AI ethics rules - TechBeacon
Verte Releases AI-enabled Omnichannel Supply Chain Platform for 3PLs and Wholesalers – Business Wire
Posted: at 10:35 am
ATLANTA--(BUSINESS WIRE)--Verte, a leader in supply chain visibility, today announced the launch of new AI capabilities on its cloud supply chain platform. Artificial intelligence is transformational to the supply chain, and in the current e-commerce landscape, a platform with seamless, real-time connectivity is mandatory for all organizations.
Verte's technology solutions enable real-time supply chain visibility with a data-driven platform that centralizes information to increase transparency into the supply chain. Verte is uniquely positioned to provide aggregated data solutions that solve key industry challenges involving inventory and customer fulfillment, enabling retailers, shippers, 3PLs, and carriers to operate more effectively. It is the ultimate advanced connector to demand channels, available and in-transit merchandise, carriers, service providers, and physical locations of any kind.
The platform's prescriptive AI solution can be applied to the planning, manufacturing, and positioning of inventory across a complex retail fulfillment network using Verte's unified, data-first approach. Verte's AI inventory management solutions analyze consumer fulfillment choices and shopping behaviors, improving e-commerce and store inventory levels. Any entity may subscribe to the service: your operations, partner providers, merchants, service providers, and vendors can all participate and optimize.
"We strongly believe that AI technology is going to have an enormous and beneficial impact on the e-commerce and supply chain industry in the coming years, which is why we are constantly advancing our AI capabilities. AI-driven solutions will be essential to e-commerce success. Organizations must leverage management tools with predictive analysis capabilities, such as our new AI-enabled omnichannel platform, to improve the customer journey," said Shlomi Amouyal, Verte's co-founder and chief technology officer.
Sharing and receiving available-to-promise inventory with and from specific suppliers requires access control to reduce lead times and gain multi-enterprise visibility into each source. In addition, running everything under one cloud platform ensures that each fulfillment decision is aligned with the business priorities, whether that entails cost or time.
Verte's multi-cloud platform is built with a distributed, microservices-driven, disruptive architecture for current and future supply chain needs. Additionally, the platform enables 6X faster onboarding and 5X speedier volume management. Features include dynamic integration for multi-channel order capture (for e-commerce, wholesale, and aggregation clients) and ready-to-start EDI integration.
The platform supports real-time progress visibility across containers and order movement across demand channels, as well as a network facility with multi-dimensional analysis of customer preferences and experience. Additional assets include distributed dynamic network fulfillment execution and prescriptive dynamic replenishment.
Organizations can simplify their order compliance with flexible, configurable templates (50+ pre-built templates, from labels to BOLs) and obtain inventory reclassification in real-time through virtual inventory segmentation across channel sales. In addition, to optimize inventory movement and picking, the Verte platform offers multiple units of measure handling of inventory across demand channels and configurable, scalable, and load-balanced execution options through robotics and manual support.
Verte's new AI features also provide efficient route planning and shipment build-up, with adaptive changes to service levels as needed. To manage and execute business delivery, organizations can utilize multi-mode last-mile delivery management, data management, and reporting analytics, with transparency and traceability enabled through blockchain. The platform allows for automated billing and invoicing with support for transactional and value-added services, in addition to centralized returns management for omnichannel returns (store returns and/or e-commerce returns).
"With Verte's machine-learning technology helping predict supply chain outcomes, sellers will be able to forecast their ability to meet a goal with a retailer or end customer. Moreover, buyers will be able to make plans based on shipment timings," said Padhu Raman, Verte's co-founder and chief product officer.
About Verte:
Verte is an AI cloud-based supply platform provider that connects, unifies, and automates commerce operations, powering retailers to sell wherever their customers are and focus on scalable growth.
We're disrupting digital commerce with innovation and smart tools to help retailers compete without having to work harder. We deliver a cloud operating system that offers speed, flexibility, and intelligence to partners, providing one of the most advanced 3PL systems. We manage all back-end eCommerce operations in one place with a network of tech-enabled warehouses, inventory management software, and product tracking, underpinned by AI.
Here is the original post:
Verte Releases AI-enabled Omnichannel Supply Chain Platform for 3PLs and Wholesalers - Business Wire
RDS and Trust Aware Process Mining: Keys to Trustworthy AI? – Techopedia
Posted: at 10:35 am
By 2024, companies are predicted to spend $500 billion annually on artificial intelligence (AI), according to the International Data Corporation (IDC).
This forecast has broad socio-economic implications because, for businesses, AI is transformative: according to a recent McKinsey study, organizations implementing AI-based applications are expected to increase cash flow 120% by 2030.
But implementing AI comes with unique challenges. For consumers, for example, AI can amplify and perpetuate pre-existing biases, and do so at scale. Cathy O'Neil, a leading advocate for AI algorithmic fairness, highlighted three adverse impacts of AI on consumers:
In fact, a PEW survey found that 58% of Americans believe AI programs amplify some level of bias, revealing an undercurrent of skepticism about AI's trustworthiness. Concerns relating to AI fairness cut across facial recognition, criminal justice, hiring practices and loan approvals, where AI algorithms have proven to produce adverse outcomes, disproportionately impacting marginalized groups.
But what can be deemed as fair, given that fairness is the foundation of trustworthy AI? For businesses, that is the million-dollar question.
AI's ever-increasing growth highlights the vital importance of balancing its utility with the fairness of its outcomes, thereby creating a culture of trustworthy AI.
Intuitively, fairness seems like a simple concept: Fairness is closely related to fair play, where everybody is treated in a similar way. However, fairness embodies several dimensions, such as trade-offs between algorithmic accuracy versus human values, demographic parity versus policy outcomes and fundamental, power-focused questions such as who gets to decide what is fair.
There are five challenges associated with contextualizing and applying fairness in AI systems:
In other words, what may be considered fair in one culture may be perceived as unfair in another.
For instance, in the legal context, fairness means due process and the rule of law, by which disputes are resolved with a degree of certainty. Fairness, in this context, is not necessarily about decision outcomes but about the process by which decision-makers reach those outcomes (and how closely that process adheres to accepted legal standards).
There are, however, other instances where application of corrective fairness is necessary. For example, to remedy discriminatory practices in lending, housing, education, and employment, fairness is less about treating everyone equally and more about affirmative action. Thus, recruiting a team to deploy an AI rollout can prove a challenge in terms of fairness and diversity. (Also read: 5 Crucial Skills That Are Needed For Successful AI Deployments.)
Equality is considered to be a fundamental human right: no one should be discriminated against on the basis of race, gender, nationality, disability or sexual orientation. While the law protects against disparate treatment (when individuals in a protected class are treated differently on purpose), AI algorithms may still produce outcomes of disparate impact (when variables that are on their face bias-neutral cause unintentional discrimination).
To illustrate how disparate impact occurs, consider Amazon's same-day delivery service. It's based on an AI algorithm which uses attributes such as distance to the nearest fulfillment center, local demand in designated ZIP code areas and frequency distribution of Prime members to determine profitable locations for free same-day delivery. Amazon's same-day delivery service was also found to be biased against people of colour, even though race was not a factor in the AI algorithm. How? The algorithm was less likely to deem ZIP codes predominantly occupied by people of colour as advantageous locations to offer the service. (Also read: Can AI Have Biases?)
Group fairness' ambition is to ensure AI algorithmic outcomes do not discriminate against members of protected groups based on demographics, gender or race. For example, in the context of credit applications, everyone ought to have an equal probability of being assigned a good credit score, resulting in predictive parity, regardless of demographic variables.
On the other hand, AI algorithms focused on individual fairness strive to create outcomes which are consistent for individuals with similar attributes. Put differently, the model ought to treat similar cases in a similar way.
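A minimal sketch of how these two notions can be checked in practice, using made-up toy data (my illustration, not from the article): demographic parity compares approval rates across groups, the disparate-impact ratio summarizes the gap, and a simple individual-fairness check asks whether near-identical applicants receive the same decision.

```python
import numpy as np

# Toy credit decisions: 1 = approved, 0 = denied; group labels "A" and "B".
decisions = np.array([1, 1, 0, 1, 1, 1, 0, 0, 0, 1])
groups    = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

rate_a = decisions[groups == "A"].mean()   # 0.8
rate_b = decisions[groups == "B"].mean()   # 0.4

# Group fairness: demographic parity difference and disparate-impact ratio.
# A ratio well below 1.0 suggests the disadvantaged group is approved far less often.
print("parity difference:", abs(rate_a - rate_b))                      # 0.4
print("disparate impact ratio:", min(rate_a, rate_b) / max(rate_a, rate_b))  # 0.5

# Individual fairness: similar applicants should get similar outcomes.
applicant_1 = {"income": 52_000, "debt": 4_000}
applicant_2 = {"income": 52_500, "debt": 4_100}   # nearly identical profile
# If a model approved one applicant and denied the other, that inconsistency is
# exactly the individual-fairness failure described in the paragraph above.
```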
In this context, fairness encompasses policy and legal considerations and leads us to ask: what exactly is fair?
For example, in the context of hiring practices, what ought to be a fair percentage of women in management positions? In other words, what percentage should AI algorithms incorporate as thresholds to promote gender parity? (Also read: How Technology Is Helping Companies Achieve Their DEI Goals in 2022.)
Before we can decide what is fair, we need to decide who gets to decide that. And, as it stands, the definition of fairness is simply what those already in power need it to be to maintain that power.
As there are many interpretations of fairness, data scientists need to consider incorporating fairness constraints in the context of specific use cases and for desired outcomes. Responsible Data Science (RDS) is a discipline influential in shaping best practices for trustworthy AI and which facilitates AI fairness.
RDS delivers a robust framework for the ethical design of AI systems that addresses the following key areas:
While RDS provides the foundation for instituting ethical AI design, organizations are challenged to look into how such complex fairness considerations are implemented and, when necessary, remedied. Doing so will help them mitigate potential compliance and reputational risks, particularly as the momentum for AI regulation is accelerating.
Conformance obligations to AI regulatory frameworks are inherently fragmented, spanning data governance, conformance testing, quality assurance of AI model behaviors, transparency, accountability, and confidentiality process activities. These processes involve multiple steps across disparate systems, hand-offs, re-works, and human-in-the-loop oversight between multiple stakeholders: IT, legal, compliance, security and customer service teams.
Process mining is a rapidly growing field which provides a data-driven approach for discovering how existing AI compliance processes work across diverse process participants and disparate systems of record. It is a data science discipline that supports in-depth analysis of how current processes work and identifies process variances, bottlenecks and surface areas for process optimization.
R&D teams, who are responsible for the development, integration, deployment, and support of AI systems, including data governance and implementation of appropriate algorithmic fairness constraints.
Legal and compliance teams, who are responsible for instituting best practices and processes to ensure adherence to AI accountability and transparency provisions; and
Customer-facing functions, who provide clarity for customers and consumers regarding the expected AI system inputs and outputs.
It can support AI compliance in several ways:
- By visualizing compliance process execution tasks relating to AI training data, such as gathering, labeling, applying fairness constraints and data governance processes.
- By discovering record-keeping and documentation process execution steps associated with data governance processes and identifying potential root causes for improper AI system execution.
- By analyzing AI transparency processes, ensuring they accurately interpret AI system outputs and provide clear information for users to trust the results.
- By examining human-in-the-loop interactions and actions taken in the event of actual anomalies in AI systems' performance.
- By monitoring, in real time, to identify processes deviating from requirements and trigger alerts in the event of non-compliant process tasks or condition changes.
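To ground the idea, here is a small, self-contained Python/pandas sketch of the kind of analysis process mining performs on an event log: reconstructing the activity sequence (variant) each case followed and flagging slow hand-offs. The event log, activity names, and threshold are invented for illustration; dedicated process mining tools go much further.

```python
import pandas as pd

# Hypothetical compliance event log: one row per executed activity.
log = pd.DataFrame({
    "case_id":   ["c1", "c1", "c1", "c2", "c2", "c2", "c2"],
    "activity":  ["collect data", "label data", "fairness review",
                  "collect data", "label data", "rework labels", "fairness review"],
    "timestamp": pd.to_datetime([
        "2022-01-03 09:00", "2022-01-03 15:00", "2022-01-04 10:00",
        "2022-01-05 09:00", "2022-01-06 11:00", "2022-01-10 09:00", "2022-01-12 16:00"]),
})

log = log.sort_values(["case_id", "timestamp"])

# Discover variants: the ordered sequence of activities each case actually followed.
variants = log.groupby("case_id")["activity"].agg(" -> ".join)
print(variants.value_counts())

# Surface bottlenecks: hand-offs that waited longer than a chosen threshold.
log["wait_hours"] = log.groupby("case_id")["timestamp"].diff().dt.total_seconds() / 3600
slow = log[log["wait_hours"] > 24]
print(slow[["case_id", "activity", "wait_hours"]])
```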
Trust aware process mining can be an important tool to support the development of rigorous AI compliance best practices that mitigate against unfair AI outcomes.
That's important, because AI adoption will largely depend on developing a culture of trustworthy AI. A Capgemini Research Institute study reinforces the importance of establishing consumer confidence in AI: nearly 50% of survey respondents have experienced what they perceive as unfair outcomes relating to the use of AI systems, 73% expect improved transparency and 76% believe in the importance of AI regulation.
At the same time, effective AI governance results in increased brand loyalty and in repeat business. Instituting trustworthy AI best practices and governance is good business. It engenders confidence and sustainable competitive advantages.
Author and trust expert Rachel Botsman said it best when she described trust as "the remarkable force that pulls you over that gap between certainty and uncertainty; the bridge between the known and the unknown."
Visit link:
RDS and Trust Aware Process Mining: Keys to Trustworthy AI? - Techopedia
Sustainability starts in the design process, and AI can help – MIT Technology Review
Posted: at 10:35 am
Artificial intelligence helps build physical infrastructure like modular housing, skyscrapers, and factory floors. "Many problems that we wrestle with in all forms of engineering and design are very, very complex problems; those problems are beginning to reach the limits of human capacity," says Mike Haley, the vice president of research at Autodesk. But there's hope with AI capabilities, Haley continues: "This is a place where AI and humans come together very nicely because AI can actually take certain very complex problems in the world and recast them."
And where AI and humans come together is at the start of the process, with generative design, which incorporates AI into the design process to explore solutions and ideas that a human alone might not even have considered. "You really want to be able to look at the entire lifecycle of producing something and ask yourself, 'How can I produce this by using the least amount of energy throughout?'" This kind of thinking will reduce the impact of not just construction, but any sort of product creation on the planet.
The symbiotic human-computer relationship behind generative design is necessary to solve those very complex problems, including sustainability. "We are not going to have a sustainable society until we learn to build products, from mobile phones to buildings to large pieces of infrastructure, that survive the long-term," Haley notes.
The key, he says, is to start in the earliest stages of the design process. "Decisions that affect sustainability happen in the conceptual phase, when you're imagining what you're going to create." He continues, "If you can begin to put features into software, into decision-making systems, early on, they can guide designers toward more sustainable solutions by affecting them at this early stage."
Using generative design will result in malleable solutions that anticipate future needs or requirements, avoiding the need to build new solutions, products, or infrastructure. "What if a building that was built for one purpose, when it needed to be turned into a different kind of building, wasn't destroyed, but it was just tweaked slightly?"
That's the real opportunity here: creating a relationship between humans and computers will be foundational to the future of design. The consequence of bringing the digital and physical together, Haley says, is that "it creates a feedback loop between what gets created in the world and what is about to be created next time."
"What is Generative Design, and How Can It Be Used in Manufacturing?" by Dan Miles, Redshift by Autodesk, November 19, 2021
"4 Ways AI in Architecture and Construction Can Empower Building Projects" by Zach Mortice, Redshift by Autodesk, April 22, 2021
Laurel Ruma: From MIT Technology Review, I'm Laurel Ruma, and this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace. Our topic today is about how to design better with artificial intelligence, everything from modular housing to skyscrapers to manufactured products and factory floors can be designed with and benefit from AI and machine learning technologies. As artificial intelligence helps humans with design options, how can it help us build smarter? Two words for you: sustainable design.
My guest is Mike Haley, the vice president of research at Autodesk. Mike leads a team of researchers, engineers, and other specialists who are exploring the future of design and making.
This episode of Business Lab is produced in association with Autodesk.
Welcome, Mike.
Mike Haley: Hi Laurel. Thanks for having me.
Laurel: So for those who don't know, Autodesk technology supports architecture, engineering, construction, product design, manufacturing, as well as media and entertainment industries. And we'll be talking about that kind of design and artificial intelligence today. But one specific aspect of it is generative design. What is generative design? And how does it lend itself to an AI-human collaboration?
Mike: So Laurel, to answer that, first you have to ask yourself: What is design? When designers are approaching a problem, they're generally looking at the problem through a number of constraints, so if you're building a building, there's a certain amount of land you have, for example. And you're also trying to improve or optimize something. So perhaps you're trying to build the building with a very low cost, or have low environmental impact, or support as many people as possible. So you've got this simultaneous problem of dealing with your constraints, and then trying to maximize these various design factors.
That is really the essence of any design problem. The history of design is that it is entirely a human problem. Humans may use tools. Those tools may be pens and pencils, they may be calculators, and they may be computers to solve that. But really, the essence of solving that problem lies purely within the human mind. Generative design is the first time we're producing technology that is using the computational capacity of the computer to assist us in that process, to help us go beyond perhaps where our usual considerations go.
As you and I'm sure most of the audience know, people talk a lot about bias in AI algorithms, but bias generally comes from the data those algorithms see, and the bias in that data generally comes from humans, so we are actually very, very biased. This shows up in design as well. The advantage of using computational assistance is you can introduce very advanced forms of AI that are not actually based on data. They're based on algorithmic or physical understandings of the world, so that when you're trying to understand that building, or design an airplane, or design a bicycle, or what it might be, it can actually use things like the laws of physics, for example, to understand the full spread of possible solutions to address that design problem I just talked about.
So in some ways, you can think of generative design as a computer technology that allows designers to expand their minds and to explore spaces and possibilities of solutions that they perhaps wouldn't go otherwise. And it might even be outside of their traditional comfort zone, so biases might prevent them from going there. One thing you find with generative design is when we watch people use this technology, they tend to use it in an iterative fashion. They will supply the problem to the computer, let the computer propose some solutions, and then they will look at those solutions and then begin to adjust their criteria and run it again. This is almost this symbiotic kind of relationship that forms between the human and the computer. And I really enjoy that because the human mind is not very good at computing. The popular idea is you can hold seven facts in your head at once, which is a lot smaller than the computer, right?
But human minds are excellent at responding and evaluating situations and bringing in a very broad set of considerations. That in fact is the essence of creativity. So if you bring that all together and look at that entire process, that is really what generative design is all about.
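As a toy illustration of the generate-and-evaluate loop described above (my sketch, not Autodesk's algorithm), the following Python snippet randomly samples candidate designs for a simple beam, discards those that violate a strength constraint, and keeps the lightest survivors. A designer would inspect the results, adjust the criteria, and run it again; a real generative design system replaces the random sampling with physics simulation and far smarter search, but the structure of the loop (constraints, objective, generate, evaluate) is the same.

```python
import random

def candidate():
    """Sample a random beam design: width and height in millimetres."""
    return {"width_mm": random.uniform(20, 200), "height_mm": random.uniform(20, 400)}

def satisfies_constraints(d, load_n=10_000, allowable_stress_mpa=20.0, span_mm=2_000):
    """Very simplified bending-stress check for a rectangular section (illustrative only)."""
    moment = load_n * span_mm / 4                       # N*mm, mid-span point load
    section_modulus = d["width_mm"] * d["height_mm"] ** 2 / 6
    return moment / section_modulus <= allowable_stress_mpa

def material_volume(d, span_mm=2_000):
    return d["width_mm"] * d["height_mm"] * span_mm     # mm^3, a proxy for cost and carbon

random.seed(0)
feasible = [c for c in (candidate() for _ in range(5_000)) if satisfies_constraints(c)]
best = sorted(feasible, key=material_volume)[:3]
for d in best:
    print(f"{d['width_mm']:.0f} x {d['height_mm']:.0f} mm, volume {material_volume(d):.2e} mm^3")
```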
Laurel: So really what you're talking about is the relationship between a human and a computer. And the output of this relationship is something that's better than either one could do by themselves.
Mike: Yes, that's right. Exactly. I mean, humans have a set of limitations, and we have a set of skills that we bring together really when we're being creative. The same is true of a computer. The computer has certain things like computation, for example, and understanding the laws of physics and things like that. But it's far better than we are. But it's also highly limited in being able to evaluate the efficacy of a solution. So generative is really about bringing those two things together.
Laurel: So there's been a lot of discussion about how AI and automation replacing workers is a fear. What is the AI human collaboration that you're envisioning for the future of work? How can this partnership continue?
Mike: There's an incredibly interesting relationship between AI and actually not just solving problems in the world together with humans, but also improving the human condition. So when we talk about the tension between AI and human work, I really like to look at it through that lens, so that when we think of AI learning the world, learning how to do things, that can lead to something like automation. Those learningsthose digital learningscan drive things like a robot, or a machine in a factory, or a machine in a construction site, or even just a computer algorithm that can decide on something for you.
That can be powerful if managed appropriately. Of course, you've always got the risks of bias and unfairness and those kinds of things that you have to be aware of. But there's another effect of AI learning: it is now able to also better understand what a human being is doing. So imagine an AI that watches you type in a word processor, for example. And it watches you type for many, many years. It learns things about your writing style. Now one of the obvious automation things it can do is begin to make suggestions for your writing, which is fine. We're beginning to see that today already. But something it could also do is actually begin to evaluate your writing and actually understand, maybe in a very nuanced way, how you compare to other writers. So perhaps you're writing a kind of fiction, and it's saying, "Well, generally in this realm of fiction, people that write like you are targeting these sorts of audiences. And maybe you want to consider this kind of tone, or nature of your writing."
In doing that, the AI is actually providing more tuned ways of teaching you as a human being through interpretation of your actions and working again in a really iterative way with a person to guide them to improve their own capability. So this is not about automating the problem. It's actually in some ironic way, automating the process of training a person and improving their skills. So we really like to put that lens on AI and look at that way in that, yes, we are automating a lot of tasks, but we can also use that same technology to help humans develop skills and improve their own capacity.
The other thing I will mention in this space is that many problems that we wrestle with in all forms of engineering and design are very, very complex, and we're talking about some of them right now. Those problems are beginning to reach the limits of human capacity. We have to start simplifying them in some ways. This is a place where AI and humans come together very nicely because AI can actually take certain very complex problems in the world and recast them. They can be recast or reinterpreted into language or sub problems that human beings can actually understand, that we can wrestle with and provide answers. And then the AI can take those answers back and provide a better solution to whatever problem we happen to be wrestling with at that time.
Laurel: So speaking of some of those really difficult problems, climate change, sustainability, that's certainly one of those. And you actually wrote, and here's a quote from your piece, quote, "Products need to improve in quality because an outmoded throw-away society is not acceptable in the long-term." So you're saying here that AI can help with those types of big societal problems too.
Mike: Yeah, exactly. This is exactly the kind of difficult problem that I was just talking about. For example, how many people get a new smartphone and, within a year or two, toss it to get a new one? And this is becoming part of just the way we live. We are not going to have a sustainable society until we actually learn to build products that survive long-term, and products can be anything from a mobile phone to a building or a large piece of infrastructure.
Now what happens in the long-term? Generally, requirements change. The purpose of things changes. People's reaction to that, again, like I just said, is to throw them away and create something new. But what if those things were amenable to change in some ways? What if they could be partially recreated halfway through their lifespan? What if a building that was built for one purpose, when it needed to be turned into a different kind of building, wasn't destroyed, but was just tweaked slightly? Because when the designer first designed that building, there was a way to contemplate what all the future uses of that building could be. What are the patterns of those? And how could that building be designed in such a way as to support those future uses?
So, solving that kind of design problem, solving a problem where you're not just solving your current problem, but you're trying to solve all the future problems in some ways is a very, very difficult problem. And it was the kind of problem I was talking about earlier on. We really need a computer to help you think through that. In design terms, this is what we call a systems problem because there's multiple systems you need to think of, a system of time, a system of society, of economy, of all sorts of things around it you need to think through. And the only way to think through that is with an AI system or a computational system being your assistant through that process.
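A rough sketch of what "recasting" a systems problem for a human can look like in practice: score a handful of hypothetical design variants against weighted objectives and present the ranking for a person to react to. The variants, objectives, and weights below are invented for illustration.

```python
# Score candidate design variants against several weighted objectives at once and
# surface the best trade-offs to a human reviewer. Everything here is illustrative.
from dataclasses import dataclass

@dataclass
class Variant:
    name: str
    cost: float             # relative construction cost
    adaptability: float     # 0..1, how easily the design can be repurposed later
    embodied_carbon: float  # tonnes CO2e (illustrative)

def score(v: Variant, weights: dict[str, float]) -> float:
    # Lower cost and carbon are better; higher adaptability is better.
    return (weights["adaptability"] * v.adaptability
            - weights["cost"] * v.cost
            - weights["carbon"] * v.embodied_carbon)

variants = [
    Variant("fixed-plan tower", cost=1.0, adaptability=0.2, embodied_carbon=950),
    Variant("modular frame", cost=1.15, adaptability=0.8, embodied_carbon=720),
    Variant("timber hybrid", cost=1.25, adaptability=0.7, embodied_carbon=480),
]
weights = {"adaptability": 2.0, "cost": 1.0, "carbon": 0.002}

for v in sorted(variants, key=lambda v: score(v, weights), reverse=True):
    print(f"{v.name:18s} score={score(v, weights):+.2f}")
```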
Laurel: I have to say that's a bit mind bending, to think about all the possible iterations of a building, or an aircraft carrier, or even a cell phone. But that sort of focus on sustainability certainly changes how products and skyscrapers and factory floors are designed. What else is possible with AI and machine learning with sustainability?
Mike: We normally tend to think along three axes. One of the key issues right now we're all aware of is climate change, which is rooted in carbon. Many, many practices in the world involve the production of enormous amounts of carbon, or what we call retained carbon. So if you're producing concrete, you're producing extra carbon in the atmosphere. So we could begin to design buildings, or products, or whatever it might be, that either use less carbon in the production of the materials, or in the creation of the structures themselves, or in the best case, even use things that have negative carbon.
For example, using a large amount of timber in a building can actually reduce overall carbon usage, because during the lifetime that tree was growing, it consumed carbon. It embodied carbon from the atmosphere into itself. And now you've used it. You've trapped it, essentially, inside the wood, and you've placed that into the building. You didn't create new carbon as a result of producing the wood. Embodied energy is something else we think of too. In creating anything in the world, there is energy that is going to go into that. That energy might be driving a factory, but that energy could be shipping products or raw materials across the world. You really want to be able to look at the entire lifecycle of producing something and ask yourself, "How can I produce this by using the least amount of energy throughout?" And then you will have a lower impact on the planet.
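As a rough illustration of the embodied-carbon and embodied-energy reasoning above, the sketch below tallies lifecycle carbon for two hypothetical bills of materials; the emission factors are placeholder values, not authoritative coefficients.

```python
# Back-of-the-envelope lifecycle tally: sum embodied carbon over material production
# and transport for a bill of materials. All factors are rough illustrative numbers.

MATERIAL_FACTORS = {          # kg CO2e per kg of material (illustrative)
    "concrete": 0.12,
    "steel": 1.85,
    "timber": -1.1,           # negative: carbon sequestered in the wood
}
TRANSPORT_FACTOR = 0.0001     # kg CO2e per kg per km (illustrative)

def embodied_carbon(bill_of_materials, transport_km):
    """bill_of_materials: {material: mass_kg}; returns total kg CO2e."""
    production = sum(MATERIAL_FACTORS[m] * kg for m, kg in bill_of_materials.items())
    transport = sum(kg * transport_km * TRANSPORT_FACTOR for kg in bill_of_materials.values())
    return production + transport

concrete_design = {"concrete": 500_000, "steel": 40_000}
timber_design = {"timber": 300_000, "steel": 20_000}

print("concrete-heavy:", round(embodied_carbon(concrete_design, 200)), "kg CO2e")
print("timber-heavy:  ", round(embodied_carbon(timber_design, 200)), "kg CO2e")
```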
The final example is waste. This is a very significant area for AI to have an effect, because waste in some ways is about a design that is not optimal. When you're producing waste from something, it means there are pieces you don't need. There's material you don't need. There's something coming out of this which is obviously being discarded. It is often possible to use AI to evaluate those designs in such a way as to minimize that waste, and then also produce automations. For example, a robot saw that cuts the timber framing for a building can know the amount of wood you have and where each piece is going to go, and cut the wood so that it produces as few offcuts to be thrown away as possible. Something like that can actually have a significant effect at the end of the day.
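The offcut idea can be illustrated with a toy cutting plan: a first-fit-decreasing heuristic that packs required cuts into stock lengths. Real cutting optimizers are far more sophisticated; the lengths here are made up.

```python
# Toy version of the "robot saw" idea: pack required timber cuts into stock boards
# with a first-fit-decreasing heuristic so that offcut waste stays small.
# Lengths are in millimetres and purely illustrative.

def plan_cuts(required_cuts, stock_length, kerf=3):
    """Assign each required cut to a stock board, opening new boards as needed."""
    boards = []  # each board is a list of cut lengths
    for cut in sorted(required_cuts, reverse=True):
        for board in boards:
            used = sum(board) + kerf * len(board)
            if used + cut + kerf <= stock_length:
                board.append(cut)
                break
        else:
            boards.append([cut])
    return boards

cuts = [2400, 1800, 1800, 1200, 900, 900, 600, 450]
boards = plan_cuts(cuts, stock_length=3600)
for i, b in enumerate(boards, 1):
    waste = 3600 - sum(b)
    print(f"board {i}: cuts={b} offcut={waste} mm")
print("boards used:", len(boards))
```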
Laurel: You mentioned earlier that AI could help with writing, for example, and with how folks write and their styles, etc. But also, understanding systems and how systems work is really important. So how could AI and ML be applied to education? And how does that affect students and teaching in general?
Mike: One of the areas that I'm very passionate about, where generative design and learning come together, is a term that we've been playing around with for a while in this research: the idea of generative learning, which is learning for you. It's a little bit along the lines of some of the stuff we talked about before, where you're almost looking at the human as part of a loop together with the computer. The computer understands what you're trying to do. It's learning more about how you compare to others, perhaps where you could improve in your own proficiencies. And then it's guiding you in those directions. Perhaps it's giving you challenges that specifically push you on those. Perhaps it's giving you directions. Perhaps it's connecting you with others who can actually help improve you.
Like I said, we think of that as sort of generative learning. What you're trying to optimize here is not a design, like what we talked about before; you're trying to optimize your learning. You're trying to optimize your skillset. I think underlying a lot of this is also a shift in paradigm. Up until fairly recently, computers were really just seen as a big calculator. Right? Certainly in design, even in our software here at Autodesk. The software was typically used to explore a design or to document a design. The software wasn't used to actually calculate every aspect of the design. It was really used, in some sense, as a very complex kind of drafting board.
This is changing now with technologies like generative design, where you really are, like I talked about earlier, working in the loop with the computer. So the computer is suggesting things to you. It's pushing you as a designer. And you as a designer are also somewhat of a curator now. You're reacting to things that the computer is suggesting or providing to you. So embracing this paradigm early on in education, with the students coming into design and engineering today, is really, really important. I think that they have an opportunity to take the fields of design and engineering to entirely different levels as the result of being able to use these new capabilities effectively.
Laurel: Another place where this has to be applied is the workplace. Employees and companies have to understand that the technology will also change the way that they work. So what tools are available to navigate our evolving workplace?
Mike: Automation can have a lot of unintended side effects in a workplace. So one of the first things any company has to do is really wrestle with that. You have to be very, very real about what the effect on your workforce will be. If automation is going to be making decisions, what's the risk that those decisions might be unfair or biased in some way? One of the things that you have to understand is that this is not just plug it in, switch it on, and everything's going to work. You have to involve your workforce right from the beginning in those decisions around automation. We see this in our own industry: the companies that are the most successful in adopting automation are the ones that are listening the most closely to their workforce at the same time.
It's not that they're not doing automation, but they're actually rolling it out in a way that's commensurate with the workforce, and there's a certain amount of openness in that process. I think the other aspect that I like to look at from a changing work environment is the ability to focus our time as human beings on what really matters, and not have to deal with so much tedium in our lives. So much of our time using a computer is tedious. You're trying to find the right application. You're trying to get help on something. You're trying to work around some little thing that you don't understand in the software.
Those kinds of things are beginning to fall away with AI and automation. And as they do, we've still got a fair way to go on that. But as we go further down the line on that, what it means is that creative people can spend more time being creative. They can focus on the essence of a problem. So if you're an architect who is laying out desks in an office space, you're probably not being paid to actually lay out every desk. You're being paid to design a space. So what if you design the space and the computer actually helps with the actual physical desk layout? Because that's a pretty simple thing to kind of automate. I think there's a really fundamental change in where people will be spending their time and how they'll be actually spending their time.
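As a deliberately simple illustration of the desk-layout example, the sketch below fills a floor plate with desks on a fixed pitch and reports the count; the room dimensions and clearances are assumptions, and a real space-planning tool would do far more.

```python
# Fill a rectangular floor plate with desks on a fixed pitch, leaving the architect to
# judge the overall space. Dimensions and clearances are illustrative assumptions.

def layout_desks(room_w, room_d, desk_w=1.6, desk_d=0.8, aisle=1.2):
    """Return (columns, rows, positions) for a simple grid of desks, in metres."""
    pitch_x = desk_w + aisle
    pitch_y = desk_d + aisle
    cols = int(room_w // pitch_x)
    rows = int(room_d // pitch_y)
    positions = [(c * pitch_x, r * pitch_y) for r in range(rows) for c in range(cols)]
    return cols, rows, positions

cols, rows, positions = layout_desks(room_w=24.0, room_d=15.0)
print(f"{cols} x {rows} grid -> {len(positions)} desks")
```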
Laurel: And that kind of comes back to a topic we just talked about, which is AI and ethics. How do companies embrace ethics with innovation in mind when they are thinking about these artificial intelligence opportunities?
Mike: This is something that's incredibly important in all of our industries. We're seeing awareness of this rise; obviously it's there in popular discussion right now. But we've been looking at this for a while, and I can give you a couple of learnings straight off the bat. First, any company that's dealing with automation and AI needs to ensure that they have support for an ethical approach right from the very top of the company, because the ethical decisions don't just sit at the technical level, they sit at all levels of decision making. They're going to be business decisions. They're going to be market decisions. They're going to be production decisions, investment decisions and technology decisions. So you have to make sure that it's understood within any corporate or industrial environment.
Next is that everybody within those organizations has to be aligned internally on what ethics actually means. Ethics is a term that's used pretty broadly. But when it actually gets down to doing something about it, and understanding whether you're being successful at it, it's very important to be quite precise. This brings me to the third point: once you've done that and you have an understanding of what it is, you need to make sure that you're solving a concrete problem, because ethics can be a very, very fuzzy topic. You can do ethics washing very, very easily in an organization.
And if you don't quickly address that and actually define a very specific problem, it will continue to be fuzzy, and it will never have the effect that you would like to see within a company. And the last thing I will say is you have to make it cultural. If you are not ensuring that ethical behavior is actually part of the cultural values of your organization, you're never going to truly practice it. You can put in governance structures, you can put in software systems, you can put in all sorts of things that ensure a fairly high level of ethics. But you'll never be certain that you're really doing it unless it's embedded deeply within the culture of actually how people behave within your organization.
Laurel: So when you take all of this together, what sorts of products or applications are you seeing in early development that we can expect or even look forward to in the next, say, three to five years?
Mike: There are a number of things. The first category I like to think of is the raise-all-the-boats category, which means that we are beginning to see tools that just generally make everybody more efficient at what they do, similar to what I was talking about earlier with the architect laying out desks. It could be a car designer who is designing a new car. In most of today's cars, there's a lot of electrical wiring. Today, the designer has to route every cable through that car and tell the software exactly where that cable goes. That's not actually very germane to the core design of the car, but it's a necessary evil to specify the car. That can be automated.
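To show the shape of that kind of routing automation, here is a minimal sketch that finds a shortest path for a single cable across a coarse occupancy grid; a production harness router handles bundles, bend radii, and clearances, and the grid here is invented.

```python
# Breadth-first search over a 2D occupancy grid: route one cable around cells already
# occupied by other components. The grid and obstacles are made up for illustration.
from collections import deque

def route(grid, start, goal):
    """Return a list of grid cells from start to goal, or None if no path exists."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([start])
    came_from = {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and (nr, nc) not in came_from:
                came_from[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

grid = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 1, 0],
    [1, 1, 0, 0, 0],
]
print(route(grid, start=(0, 0), goal=(3, 4)))
```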
We're beginning to see these fairly simple automations beginning to become available to all designers, all engineers, that just allow them to be a little bit more efficient, allow them to be a little bit more precise without any extra effort, so I like to think of that as the raise-all-the-boats kind of feature. The next thing, which we touched on earlier in the session, was the sustainability of solutions. It turns out that most of the key decisions that affect the sustainability of a product, or a building, or really anything, happen in the earliest stages of the design. They really happen in this very sort of conceptual phase when you're imagining what you're going to create. So if you can begin to put features into software, into decision-making systems early on, they can guide designers towards more sustainable solutions through affecting them at this early stage. That's the next thing I think we're going to see.
The other thing I'm seeing quite a lot already, and this is not just true in AI but generally true in the digital space, is the emergence of platforms and very flexible tools that shape themselves to the needs of the users. When I was first using a lot of software, as I'm sure many of us remember, you had one product. It always did a very specific thing, and it was the same for whoever used it. That era is ending, and we're now seeing tools that are highly customizable, perhaps even automatically reconfiguring themselves as they understand more about what you need from them. If they understand more about what your job truly is, they will adjust to that. So I think that's the other thing we're seeing.
The final thing I'll mention is that over the next three to five years, we're going to see more about the breaking down of the barrier between digital and physical. Artificial intelligence has the ability to interpret the world around us. It can use sensors. Perhaps it's microphones, perhaps it's cameras, or perhaps it's more complicated sensors like strain sensors inside concrete, or stress sensors on a bridge, or even understanding the ways humans are behaving in a space. AI can actually use all of those sensors to start interpreting them and create an understanding, a more nuanced understanding of what's going on in that environment. This was very difficult, even 10 years ago. It was very, very difficult to create computer algorithms that could do those sorts of things.
So if you take, for example, something like human behavior, we can actually start creating buildings that understand how humans behave in them. They can understand how the air conditioning and the temperature of the building change during the day. How do people feel inside the building? Where do people congregate? How do people flow through it? What is the timing of usage of that building? If you can begin to understand all of that and actually pull it together, it means the next building you create, or even improvements to the current building, can be better, because the system now understands more about how that building is actually being used. There's a digital understanding of this.
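A small sketch of how raw occupancy readings might be turned into that kind of "digital understanding": bucket sensor counts by zone and hour so later design or operations decisions can be tuned to actual usage. The readings are fabricated examples of the data shape, not real telemetry.

```python
# Bucket occupancy readings by zone and hour to summarize how a building is used.
from collections import defaultdict

# (hour_of_day, zone, people_counted) as an occupancy sensor might report them
readings = [
    (8, "lobby", 12), (9, "lobby", 30), (9, "cafe", 5),
    (12, "cafe", 48), (12, "lobby", 10), (14, "meeting-3F", 22),
    (18, "lobby", 25), (18, "cafe", 3),
]

usage = defaultdict(list)
for hour, zone, count in readings:
    usage[(zone, hour)].append(count)

# Average occupancy per zone per hour: the pattern a next design or HVAC schedule can reuse.
for (zone, hour), counts in sorted(usage.items()):
    print(f"{zone:12s} {hour:02d}:00  avg occupancy {sum(counts) / len(counts):.1f}")
```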
This is not just limited to buildings, of course. This could be literally any product out there. And this is the consequence of bringing the digital and physical together: it creates a feedback loop between what gets created in the world and what is about to be created next time. And the digital understanding of that can constantly improve those outcomes.
Laurel: That's an amazing outlook. Mike, thank you so much for joining us today on what's been a fantastic conversation on The Business Lab.
Mike: You're very welcome, Laurel. It was super fun. Thank you.
Laurel: That was Mike Haley, vice president of research at Autodesk, whom I spoke with from Cambridge, Massachusetts, the home of MIT and MIT Technology Review, overlooking the Charles River. That's it for this episode of Business Lab. I'm your host, Laurel Ruma. I'm the director of Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology. And you can find us in print, on the web, and at events each year around the world. For more information about us and the show, please check out our website at technologyreview.com.
This show is available wherever you get your podcasts. If you enjoyed this episode, we hope you'll take a moment to rate and review us. Business Lab is a production of MIT Technology Review. This episode was produced by Collective Next. Thanks for listening.
This podcast episode was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review's editorial staff.
Read more:
Sustainability starts in the design process, and AI can help - MIT Technology Review
Harnessing data analytics and AI-powered decision-making for supply chain resiliency – Automotive News
Posted: at 10:34 am
Traditionally, the most difficult part of mapping a supply chain is identifying all the different silos and sidings where pertinent information may be stored.
"With automotive, you could spend your entire life trying to model the entire supply chain and all the global implications," says Jafaar Beydoun, sales director at software firm o9 Solutions. "For something like EV batteries, you could get all the way to lithium mining." It's possible to model all those relationships in time, he says, but it's better for companies to focus first on their most vital partners and suppliers.
"One of our clients was using our software for supply-and-demand predictions, but they realized that without getting the suppliers directly involved, their information was always outdated," Beydoun says. "To get real-time information in one place, they figured out the most critical and highest-spend suppliers, integrated the ones they worked best with first, and then adopted suppliers farther down the list later on."
Mapping this way is iterative, and after perfecting the onboarding and data exchange processes with the first set of 15 suppliers, it's easier to extend them to the next group of 10 or 15, all the way down to the smallest suppliers. As an example, Beydoun cites a small supplier in Bangladesh that did not have a reliable internet connection, but which could upload a single weekly data sheet to inform forecasting models.
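A minimal sketch of that phased approach, assuming made-up supplier names, spend figures, and weights: rank suppliers by a blend of spend and criticality, then onboard them in waves.

```python
# Rank suppliers by spend and criticality, then cut the list into onboarding waves.
# Names, scores, weights, and the wave size are illustrative assumptions.

suppliers = [
    {"name": "battery-cells-A", "annual_spend": 120e6, "criticality": 0.95},
    {"name": "wiring-harness-B", "annual_spend": 40e6, "criticality": 0.8},
    {"name": "fasteners-C", "annual_spend": 2e6, "criticality": 0.3},
    {"name": "seats-D", "annual_spend": 55e6, "criticality": 0.7},
    {"name": "trim-E", "annual_spend": 5e6, "criticality": 0.4},
]

MAX_SPEND = max(s["annual_spend"] for s in suppliers)

def priority(s):
    # Blend normalized spend and criticality; the 60/40 split is a judgment call.
    return 0.6 * (s["annual_spend"] / MAX_SPEND) + 0.4 * s["criticality"]

ranked = sorted(suppliers, key=priority, reverse=True)
WAVE_SIZE = 2
waves = [ranked[i:i + WAVE_SIZE] for i in range(0, len(ranked), WAVE_SIZE)]
for n, wave in enumerate(waves, 1):
    print(f"wave {n}:", [s["name"] for s in wave])
```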
"This OEM reached almost total supply-chain visibility in two years with this phased approach, about half the time it would have taken if they'd tried to implement all vendors at once," he says. Both o9 Solutions and Palantir use Amazon Web Services (AWS) as their cloud-computing platform, which allows them to spin up computing power as needed. But AWS's expertise comes into play in other ways, too.
"Many organizations are reconfiguring their IT needs around the platforms that Amazon Web Services provides," says Manish Govil, the company's supply chain global segment leader. "We have the ability to gather and organize data from disparate systems, such as point-of-sale, ERP and Internet of Things (IoT) devices. AWS's data ingestion, transmission and storage pipeline provides the capability to stitch together data from those disparate systems for end-to-end visibility."
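Here is a minimal sketch of that stitching step, assuming hypothetical point-of-sale, ERP, and IoT record shapes; the field names and the unify() mapping are illustrative and do not correspond to any AWS API.

```python
# Normalize point-of-sale, ERP, and IoT messages into one event shape before loading
# them into a shared store. All record shapes below are hypothetical examples.
from datetime import datetime, timezone

pos_record = {"sku": "BRK-100", "qty_sold": 3, "ts": "2022-01-24T10:00:00Z"}
erp_record = {"part_no": "BRK-100", "on_hand": 140, "timestamp": 1643018400}
iot_record = {"deviceId": "dock-7", "sku": "BRK-100", "event": "pallet_received", "epoch_ms": 1643020200000}

def unify(source: str, rec: dict) -> dict:
    """Map each source's fields onto one common event schema."""
    if source == "pos":
        when = datetime.fromisoformat(rec["ts"].replace("Z", "+00:00"))
        return {"sku": rec["sku"], "source": "pos", "metric": "units_sold", "value": rec["qty_sold"], "at": when}
    if source == "erp":
        when = datetime.fromtimestamp(rec["timestamp"], tz=timezone.utc)
        return {"sku": rec["part_no"], "source": "erp", "metric": "on_hand", "value": rec["on_hand"], "at": when}
    if source == "iot":
        when = datetime.fromtimestamp(rec["epoch_ms"] / 1000, tz=timezone.utc)
        return {"sku": rec["sku"], "source": "iot", "metric": rec["event"], "value": 1, "at": when}
    raise ValueError(f"unknown source {source}")

events = [unify("pos", pos_record), unify("erp", erp_record), unify("iot", iot_record)]
for e in sorted(events, key=lambda e: e["at"]):
    print(e["at"].isoformat(), e["source"], e["sku"], e["metric"], e["value"])
```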
The company also has plenty of expertise managing its own supply chain, with many partners who have highly specialized capabilities in different areas of the supply chain. "We understand demand-shaping, sensing and planning, transportation management, real-time transportation visibility and warehouse management," Govil says. "There are a lot of organizations that can provide one or two of these areas of expertise, but we have a very extensive ecosystem that brings them all together."
Along with Apple, Procter & Gamble, McDonald's, and Unilever, Amazon is one of five companies deemed a supply-chain master by consulting firm Gartner, best known for its annual list of the top 25 supply-chain management companies. That experience informs how AWS helps other companies build similar types of systems, Govil says. There are already business networks that have built the connective tissue for supply-chain visibility through AWS.
While the data is owned by the individual companies, this connectivity allows very specific data to be shared from many disparate players far faster than any conventional data-gathering process, and speed is of the essence.
Aidoc partners with Novant Health, providing imaging AI to expedite treatment for patients in the emergency department – Yahoo Finance
Posted: at 10:34 am
Novant Health's integration of Aidoc's AI solutions, amid the latest wave of the Omicron variant, expands upon its existing slate of innovative technologies designed to improve delivery of patient care and outcomes
NEW YORK, Jan. 24, 2022 /PRNewswire/ -- Aidoc, the leading provider of enterprise-grade AI solutions for medical imaging, announces a partnership with Novant Health, a health network of over 1,800 physicians with 15 medical centers across three states. By incorporating Aidoc's AI platform, which includes seven FDA-cleared solutions for triage and notification of patients with acute medical conditions, Novant Health is taking proactive steps to improve patient outcomes and reduce emergency department (ED) length of stay amid resource constraints inflicted by the Omicron variant.
With a dedication to digital transformation for improving workflow efficiencies and patient outcomes, Novant Health is one of the first health networks in North Carolina to adopt Aidoc's AI platform. Novant Health has integrated multiple technologies and has been recognized by the College of Healthcare Information Management Executives' (CHIME) "Digital Health Most Wired" program five years in a row for effectively applying "core and advanced technologies into their clinical and business programs to improve health and care in their communities."
"When diagnosing and treating critical pathologies like pulmonary emboli and hemorrhagic strokes, every second counts," said Dr. Eric Eskioglu, Executive Vice President Chief Medical and Scientific Officer, Novant Health. "We are thrilled to partner with Aidoc to bring yet another leading-edge AI-technology to Novant Health. For years, we've been committed to harnessing innovative technologies to improve patient safety and outcomes through the Novant Health Institute of Innovation and Artificial Intelligence. With Aidoc's technology, our physicians will be able to more quickly identify and prioritize these patients and provide rapid life-saving treatments."
From Aidoc's AI platform, Novant Health will be utilizing the intracranial hemorrhage (brain bleed), pulmonary embolism (lung blood clot), incidental pulmonary embolism, C-spine fracture, and abdominal free air AI solutions. In one example, a study conducted by the Yale New Haven Health System found that Aidoc's intracranial hemorrhage AI solution was able to reduce ED length of stay by approximately one hour.
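Operationally, triage of this kind often amounts to reordering a reading worklist by the model's confidence that a study contains an acute finding. The sketch below shows that reordering with a priority queue; the studies and probabilities are invented, and this is not a description of Aidoc's algorithm.

```python
# Reorder a radiology worklist so exams flagged as likely acute are read first.
# Accession numbers, findings, and probabilities are fabricated for illustration.
from dataclasses import dataclass, field
import heapq

@dataclass(order=True)
class Study:
    priority: float
    accession: str = field(compare=False)
    suspected: str = field(compare=False)

def enqueue(worklist, accession, suspected_finding, probability):
    # heapq is a min-heap, so negate the probability to pop the most urgent study first.
    heapq.heappush(worklist, Study(-probability, accession, suspected_finding))

worklist: list[Study] = []
enqueue(worklist, "ACC-1001", "none flagged", 0.02)
enqueue(worklist, "ACC-1002", "intracranial hemorrhage", 0.91)
enqueue(worklist, "ACC-1003", "pulmonary embolism", 0.74)

while worklist:
    study = heapq.heappop(worklist)
    print(f"read next: {study.accession} ({study.suspected}, p={-study.priority:.2f})")
```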
"With rapidly rising numbers of people infected with the highly contagious Omicron variant, we can see the hard impact on hospital emergency room capacities and resources across the U.S.," says Elad Walach, CEO and co-founder of Aidoc. "We're proud to partner with a leading, innovative hospital network like Novant Health, which serves a large portion of the population in the three states its facilities are located in. Together, through our AI solutions and their state-of-the-art facilities, we will enable radiologists and related hospital providers to expedite care for tens of thousands of patients, contributing toward a mitigation of the current emergency room situations and setting an example for integrating innovation during turbulent and non-turbulent periods."
About Aidoc
Aidoc delivers the most comprehensive and widely-used portfolio of AI solutions, supporting providers by flagging patients with suspected acute conditions in real-time, expediting patient treatment and improving quality of care. Aidoc's healthcare AI platform is currently used by thousands of physicians in hospitals and radiology groups worldwide and across multiple care coordination service lines, having analyzed over 10.3 million scans in the past year. For more information, visit http://www.aidoc.com.
About Novant Health
Novant Health is an integrated network of physician clinics, outpatient facilities and hospitals that delivers a seamless and convenient healthcare experience to communities in North Carolina, South Carolina, and Georgia. The Novant Health network consists of more than 1,800 physicians and over 35,000 employees who provide care at nearly 800 locations, including 15 hospitals and hundreds of outpatient facilities and physician clinics. In 2021, Novant Health was the highest-ranking healthcare system in North Carolina to be included on Forbes' Best Employers for Diversity list. Diversity MBA Magazine ranked Novant Health first in the nation on its 2021 list of "Best Places for Women & Diverse Managers to Work." In 2020, Novant Health provided more than $1.02 billion in community benefit, including financial assistance and services.
For more information, please visit our website at NovantHealth.org. You can also follow us on Twitter and Facebook.
Ariella Shoham, VP Marketing, ariella@aidoc.com
View original content:https://www.prnewswire.com/news-releases/aidoc-partners-with-novant-health-providing-imaging-ai-to-expedite-treatment-for-patients-in-the-emergency-department-301466537.html
SOURCE Aidoc
Hardware accelerators in the world of Perception AI – Analytics India Magazine
Posted: at 10:34 am
"Perception systems can be defined as a machine or edge device with embedded advanced intelligence, which can perceive its surroundings, take meaningful abstractions out of them, and allow itself to make some decisions in real time," said Pradeep Sukumaran, VP, AI & Cloud at Ignitarium, at the Machine Learning Developers Summit (MLDS) in his talk titled "Hardware Accelerators in the World of Perception AI."
The key components of a perception AI system include sensing systems such as cameras, lidar, radar, and microphones.
Pradeep says, "Looking at the cost and power parameters, and now with the advent of deep learning, which is a subset of ML, and the availability of some very interesting hardware options, I think this has opened up the use of deep learning. In some cases it is completely replacing the traditional signal processing algorithms, going way beyond what was done earlier in terms of the amount of data it can process, and in some cases there is a combination of traditional signal processing with deep learning."
Perception AI: Use cases
Automotive and Robotics
In the trucking industry, sensors guide a truck from source to destination on dedicated lanes. There are also lower-end use cases like service or delivery robots, which use sensors to understand their surroundings and find their way around.
Predictive maintenance
Companies use vibration sensors attached to motors to recognize specific signatures. These have typically been detected with DSP-based pattern recognition, but that approach is now being replaced by ML and deep learning models that can run on low-power hardware.
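A small sketch of the vibration-signature idea: extract a dominant frequency and RMS level from an accelerometer trace and flag readings whose spectrum drifts from a baseline. The signals, sample rate, and alert threshold are synthetic illustrations.

```python
# Extract simple spectral features from a vibration trace and flag drift from a baseline.
import numpy as np

FS = 1000  # sample rate in Hz (assumed)

def spectral_features(signal: np.ndarray) -> dict:
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1 / FS)
    peak = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin
    return {"peak_freq_hz": float(peak), "rms": float(np.sqrt(np.mean(signal ** 2)))}

t = np.arange(0, 1, 1 / FS)
healthy = 0.5 * np.sin(2 * np.pi * 50 * t) + 0.05 * np.random.randn(len(t))
worn = 0.3 * np.sin(2 * np.pi * 50 * t) + 0.8 * np.sin(2 * np.pi * 180 * t)  # synthetic fault

baseline = spectral_features(healthy)
for label, sig in (("healthy", healthy), ("worn bearing", worn)):
    f = spectral_features(sig)
    drift = abs(f["peak_freq_hz"] - baseline["peak_freq_hz"])
    print(label, f, "ALERT" if drift > 20 else "ok")
```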
Surveillance
In multimodal use cases, surveillance is done with a combination of deep learning and specialized hardware. Multiple sensors now combine audio and video to extract information from the surroundings. 2D cameras paired with 3D LiDARs can be used at traffic junctions to monitor vehicle and pedestrian movement. Sometimes 2D cameras miss objects because of excessive light, rain, or other environmental conditions that obstruct standard cameras; 3D LiDAR can still detect objects in those conditions, and a combination of the two yields the traffic patterns needed for a more intelligent traffic management system.
Medical equipment
The medical field is also using deep learning and FPGAs, specifically for smart surgery, smart surgical equipment, and similar applications.
Edge AI
General-purpose hardware such as the CPU, DSP, and GPU is coupled to a DNN engine.
Deep learning models require specific hardware to run efficiently. These are called DNN engines. Vendors are attaching them to CPUs, DSPs, and GPUs, basically allowing the CPUs to offload some of the work to engines that are tightly coupled on the same chip. General-purpose hardware is now getting variants that are tuned for AI.
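The offloading pattern can be sketched as a simple dispatcher that routes supported operations to a stub "engine" and falls back to a CPU path for everything else; no real vendor SDK is being called here, and the supported-op list is hypothetical.

```python
# Conceptual sketch of offloading: send supported layers to a (stub) DNN engine and
# fall back to the CPU path for the rest. The "engine" is pure Python pretending to
# be an accelerator; no real vendor SDK is invoked.

ACCELERATED_OPS = {"conv2d", "matmul"}  # ops the hypothetical engine supports

def run_on_engine(op, payload):
    # Stand-in for a call into a tightly coupled DNN engine.
    return f"[engine] {op}({payload})"

def run_on_cpu(op, payload):
    # Fallback path: the general-purpose core executes the op itself.
    return f"[cpu]    {op}({payload})"

def dispatch(graph):
    """Route each op in a toy model graph to the engine when possible."""
    results = []
    for op, payload in graph:
        runner = run_on_engine if op in ACCELERATED_OPS else run_on_cpu
        results.append(runner(op, payload))
    return results

model_graph = [("conv2d", "frame_0"), ("relu", "act_0"), ("matmul", "feat_0"), ("softmax", "logits")]
for line in dispatch(model_graph):
    print(line)
```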
FPGAs are programmable devices, and the companies providing FPGAs want to enable AI in their customers' key applications across industries. They offer high performance at low power: you can write the code, burn it into the FPGA, and even redesign it in the field. The trade-off is a lack of software-developer friendliness, since developers have to work at the hardware level to implement neural nets. Companies are building tools and SDKs that make this easier, but there is still a long way to go.
ASICs are application-specific integrated circuits designed specifically for AI workloads.
See original here:
Hardware accelerators in the world of Perception AI - Analytics India Magazine