Category Archives: Ai

Verte Releases AI-enabled Omnichannel Supply Chain Platform for 3PLs and Wholesalers – Business Wire

Posted: January 24, 2022 at 10:35 am

ATLANTA--(BUSINESS WIRE)--Verte, a leader in supply chain visibility, today announced the launch of new AI capabilities on its cloud supply chain platform. Artificial intelligence is transformational to the supply chain, and in the current e-commerce landscape, a platform with seamless, real-time connectivity is mandatory for all organizations.

Verte's technology solutions enable real-time supply chain visibility with a data-driven platform that centralizes information to increase transparency into the supply chain. Verte is uniquely positioned to provide aggregated data solutions that solve key industry challenges involving inventory and customer fulfillment, enabling retailers, shippers, 3PLs, and carriers to operate more effectively. It is the ultimate advanced connector to demand channels, available and in-transit merchandise, carriers, service providers, and physical locations of any kind.

The platform's AI prescriptive solution can be applied to the planning, manufacturing, and positioning of inventory across a complex retail fulfillment network using Verte's unified, data-first approach. Verte's AI inventory management solutions analyze consumer fulfillment choices and shopping behaviors, improving e-commerce and store inventory levels. Any entity may subscribe to the service: your operations, partner providers, merchants, service providers, and vendors can all participate and optimize.

"We strongly believe that AI technology is going to have an enormous and beneficial impact on the e-commerce and supply chain industry in the coming years, which is why we are constantly advancing our AI capabilities. AI-driven solutions will be essential to e-commerce success. Organizations must leverage management tools with predictive analysis capabilities, such as our new AI-enabled omnichannel platform, to improve the customer journey," said Shlomi Amouyal, Verte's co-founder and chief technology officer.

Sharing and receiving available-to-promise inventory with and from specific suppliers requires access control to reduce lead times and gain multi-enterprise visibility into each source. In addition, running everything under one cloud platform ensures that each fulfillment decision is aligned with the business priorities, whether that entails cost or time.

Verte's multi-cloud platform is built with a distributed, microservices-driven architecture for current and future supply chain needs. Additionally, the platform enables 6X faster onboarding and 5X faster volume management. Features include dynamic integration for multi-channel order capture (for e-commerce, wholesale, and aggregation clients) and ready-to-start EDI integration.

The platform supports real-time progress visibility across containers and order movement across demand channels, as well as a network facility with multi-dimensional analysis of customer preferences and experience. Additional assets include distributed dynamic network fulfillment execution and prescriptive dynamic replenishment.

Organizations can simplify their order compliance with flexible, configurable templates (50+ pre-built templates, from labels to BOLs) and reclassify inventory in real time through virtual inventory segmentation across sales channels. In addition, to optimize inventory movement and picking, the Verte platform handles multiple units of measure across demand channels and offers configurable, scalable, load-balanced execution options through robotics and manual support.

Verte's new AI features also provide efficient route planning and shipment build-up, with adaptive changes to service levels as needed. To manage and execute business delivery, organizations can use multi-mode last-mile delivery management, data management, and reporting analytics, with transparency and traceability enabled through blockchain. The platform allows for automated billing and invoicing with support for transactional and value-added services, in addition to centralized returns management for omnichannel returns (store and/or e-commerce returns).

"With Verte's machine-learning technology helping predict supply chain outcomes, sellers will be able to forecast their ability to meet a goal with a retailer or end customer. Moreover, buyers will be able to make plans based on shipment timings," said Padhu Raman, Verte's co-founder and chief product officer.

About Verte:

Verte is an AI cloud-based supply platform provider that connects, unifies, and automates commerce operations, powering retailers to sell wherever their customers are and focus on scalable growth.

We're disrupting digital commerce with innovation and smart tools to help retailers compete without having to work harder. We deliver a cloud operating system that offers speed, flexibility, and intelligence to partners, providing one of the most advanced 3PL systems. We manage all back-end eCommerce operations in one place with a network of tech-enabled warehouses, inventory management software, and product tracking - underpinned by AI.

RDS and Trust Aware Process Mining: Keys to Trustworthy AI? – Techopedia

Posted: at 10:35 am

By 2024, companies are predicted to spend $500 billion annually on artificial intelligence (AI), according to the International Data Corporation (IDC).

This forecast has broad socio-economic implications because, for businesses, AI is transformative: according to a recent McKinsey study, organizations implementing AI-based applications are expected to increase cash flow by 120% by 2030.

But implementing AI comes with unique challenges. For consumers, for example, AI can amplify and perpetuate pre-existing biases, and do so at scale. Cathy O'Neil, a leading advocate for AI algorithmic fairness, has highlighted three adverse impacts of AI on consumers.

In fact, a Pew survey found that 58% of Americans believe AI programs amplify some level of bias, revealing an undercurrent of skepticism about AI's trustworthiness. Concerns relating to AI fairness cut across facial recognition, criminal justice, hiring practices and loan approvals, where AI algorithms have proven to produce adverse outcomes that disproportionately impact marginalized groups.

But what can be deemed fair, given that fairness is the foundation of trustworthy AI? For businesses, that is the million-dollar question.

AI's ever-increasing growth highlights the vital importance of balancing its utility with the fairness of its outcomes, thereby creating a culture of trustworthy AI.

Intuitively, fairness seems like a simple concept: it is closely related to fair play, where everybody is treated in a similar way. However, fairness embodies several dimensions, such as trade-offs between algorithmic accuracy and human values, between demographic parity and policy outcomes, and fundamental, power-focused questions such as who gets to decide what is fair.

There are five challenges associated with contextualizing and applying fairness in AI systems:

Fairness is contextual: what may be considered fair in one culture may be perceived as unfair in another.

For instance, in the legal context, fairness means due process and the rule of law, by which disputes are resolved with a degree of certainty. Fairness, in this context, is not necessarily about decision outcomes but about the process by which decision-makers reach those outcomes (and how closely that process adheres to accepted legal standards).

There are, however, other instances where corrective fairness must be applied. For example, to remedy discriminatory practices in lending, housing, education, and employment, fairness is less about treating everyone equally and more about affirmative action. Thus, recruiting a team to deploy an AI rollout can prove a challenge in terms of fairness and diversity.

Equality is considered a fundamental human right: no one should be discriminated against on the basis of race, gender, nationality, disability or sexual orientation. While the law protects against disparate treatment (when individuals in a protected class are treated differently on purpose), AI algorithms may still produce outcomes of disparate impact (when variables that are bias-neutral on their face cause unintentional discrimination).

To illustrate how disparate impact occurs, consider Amazon's same-day delivery service. It is based on an AI algorithm that uses attributes such as distance to the nearest fulfillment center, local demand in designated ZIP code areas and the frequency distribution of Prime members to determine profitable locations for free same-day delivery. Amazon's same-day delivery service was found to be biased against people of colour even though race was not a factor in the algorithm. How? The algorithm was less likely to deem ZIP codes predominantly occupied by people of colour as advantageous locations to offer the service.

The ambition of group fairness is to ensure that AI algorithmic outcomes do not discriminate against members of protected groups based on demographics, gender or race. For example, in the context of credit applications, everyone ought to have an equal probability of being assigned a good credit score, resulting in predictive parity regardless of demographic variables.

On the other hand, AI algorithms focused on individual fairness strive to create outcomes which are consistent for individuals with similar attributes. Put differently, the model ought to treat similar cases in a similar way.
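As a rough illustration of how these group fairness notions are checked in practice, the sketch below computes a demographic parity gap (the difference in favorable-decision rates between two groups) and a predictive parity gap (the difference in precision among approved applicants). The data, group labels and decisions are entirely hypothetical, not drawn from the article.

```python
import numpy as np

# Hypothetical decisions and outcomes for two demographic groups (illustrative data only).
rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)      # 0 = group A, 1 = group B
y_true = rng.integers(0, 2, size=1000)     # actual repayment outcome
y_pred = rng.integers(0, 2, size=1000)     # model's "good credit" decision

def selection_rate(pred, mask):
    """Share of people in a group who receive the favorable decision."""
    return pred[mask].mean()

def precision(pred, true, mask):
    """Among those approved in a group, how many actually repaid (predictive parity)."""
    approved = mask & (pred == 1)
    return true[approved].mean() if approved.any() else float("nan")

rate_a = selection_rate(y_pred, group == 0)
rate_b = selection_rate(y_pred, group == 1)
print("Demographic parity difference:", abs(rate_a - rate_b))

prec_a = precision(y_pred, y_true, group == 0)
prec_b = precision(y_pred, y_true, group == 1)
print("Predictive parity difference:", abs(prec_a - prec_b))
```

In practice, a data scientist would run such checks as fairness constraints or post-hoc audits against a real credit model and real outcomes rather than random data.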

In this context, fairness encompasses policy and legal considerations and leads us to ask, "What exactly is fair?"

For example, in the context of hiring practices, what ought to be a fair percentage of women in management positions? In other words, what percentage should AI algorithms incorporate as a threshold to promote gender parity?

Before we can decide what is fair, we need to decide who gets to decide that. And, as it stands, the definition of fairness is simply what those already in power need it to be to maintain that power.

As there are many interpretations of fairness, data scientists need to consider incorporating fairness constraints in the context of specific use cases and desired outcomes. Responsible Data Science (RDS) is a discipline that shapes best practices for trustworthy AI and facilitates AI fairness.

RDS delivers a robust framework for the ethical design of AI systems, addressing several key areas.

While RDS provides the foundation for instituting ethical AI design, organizations are challenged to look into how such complex fairness considerations are implemented and, when necessary, remedied. Doing so will help them mitigate potential compliance and reputational risks, particularly as the momentum for AI regulation is accelerating.

Conformance obligations under AI regulatory frameworks are inherently fragmented, spanning data governance, conformance testing, quality assurance of AI model behaviors, transparency, accountability, and confidentiality processes. These processes involve multiple steps across disparate systems, hand-offs, re-works, and human-in-the-loop oversight between multiple stakeholders: IT, legal, compliance, security and customer service teams.

Process mining is a rapidly growing field that provides a data-driven approach for discovering how existing AI compliance processes work across diverse process participants and disparate systems of record. It is a data science discipline that supports in-depth analysis of how current processes work and identifies process variances, bottlenecks, and areas for process optimization.

Several groups of stakeholders are involved in these compliance processes:

R&D teams, who are responsible for the development, integration, deployment, and support of AI systems, including data governance and implementation of appropriate algorithmic fairness constraints;

Legal and compliance teams, who are responsible for instituting best practices and processes to ensure adherence to AI accountability and transparency provisions; and

Customer-facing functions, who provide clarity for customers and consumers regarding the expected AI system inputs and outputs.

Trust aware process mining supports these teams in several ways (a small illustrative sketch follows this list):

By visualizing compliance process execution tasks relating to AI training data, such as gathering, labeling, applying fairness constraints and data governance processes.

By discovering record-keeping and documentation process execution steps associated with data governance processes and identifying potential root causes for improper AI system execution.

By analyzing AI transparency processes, ensuring they accurately interpret AI system outputs and provide clear information for users to trust the results.

By examining human-in-the-loop interactions and actions taken in the event of actual anomalies in AI systems' performance.

By monitoring processes in real time to identify deviations from requirements and trigger alerts in the event of non-compliant process tasks or condition changes.
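The sketch below is a minimal, hypothetical example of the event-log analysis that the discovery and monitoring steps above rely on: it reconstructs the execution variants of a compliance process per case and flags cases that skipped a required step. The column names, activities and data are assumptions made for illustration, not taken from any specific tool.

```python
import pandas as pd

# Hypothetical event log of AI compliance process steps (illustrative data only).
log = pd.DataFrame({
    "case_id":  ["c1", "c1", "c1", "c2", "c2", "c2", "c2", "c3", "c3"],
    "activity": ["collect data", "label data", "fairness review",
                 "collect data", "label data", "rework labels", "fairness review",
                 "collect data", "fairness review"],
    "timestamp": pd.to_datetime([
        "2022-01-03 09:00", "2022-01-03 11:00", "2022-01-04 10:00",
        "2022-01-05 09:00", "2022-01-05 10:00", "2022-01-06 09:00", "2022-01-07 15:00",
        "2022-01-10 09:00", "2022-01-10 12:00",
    ]),
})

# Discover process variants: the ordered sequence of activities per case.
variants = (
    log.sort_values("timestamp")
       .groupby("case_id")["activity"]
       .apply(lambda acts: " -> ".join(acts))
)
print(variants.value_counts())

# Flag cases that skipped a required compliance step.
required = {"collect data", "label data", "fairness review"}
complete = log.groupby("case_id")["activity"].apply(lambda acts: required.issubset(set(acts)))
print("Cases missing required steps:", list(complete[~complete].index))
```

Commercial process-mining tools perform this kind of variant discovery and conformance checking at much larger scale, directly against systems of record.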

Trust aware process mining can be an important tool to support the development of rigorous AI compliance best practices that mitigate against unfair AI outcomes.

That's important, because AI adoption will largely depend on developing a culture of trustworthy AI. A Capgemini Research Institute study reinforces the importance of establishing consumer confidence in AI: nearly 50% of survey respondents have experienced what they perceive as unfair outcomes relating to the use of AI systems, 73% expect improved transparency and 76% believe in the importance of AI regulation.

At the same time, effective AI governance results in increased brand loyalty and in repeat business. Instituting trustworthy AI best practices and governance is good business. It engenders confidence and sustainable competitive advantages.

Author and trust expert Rachel Botsman said it best when she described trust as "the remarkable force that pulls you over that gap between certainty and uncertainty; the bridge between the known and the unknown."

Sustainability starts in the design process, and AI can help – MIT Technology Review

Posted: at 10:35 am

Artificial intelligence helps build physical infrastructure like modular housing, skyscrapers, and factory floors. "Many problems that we wrestle with in all forms of engineering and design are very, very complex problems. Those problems are beginning to reach the limits of human capacity," says Mike Haley, the vice president of research at Autodesk. But there's hope with AI capabilities, Haley continues: "This is a place where AI and humans come together very nicely because AI can actually take certain very complex problems in the world and recast them."

And where AI and humans come together is at the start of the process, with generative design, which incorporates AI into the design process to explore solutions and ideas that a human alone might not have considered. "You really want to be able to look at the entire lifecycle of producing something and ask yourself, how can I produce this by using the least amount of energy throughout?" This kind of thinking will reduce the impact on the planet of not just construction, but any sort of product creation.

The symbiotic human-computer relationship behind generative design is necessary to solve those very complex problems, including sustainability. "We are not going to have a sustainable society until we learn to build products, from mobile phones to buildings to large pieces of infrastructure, that survive the long term," Haley notes.

The key, he says, is to start in the earliest stages of the design process. "Decisions that affect sustainability happen in the conceptual phase, when you're imagining what you're going to create," he continues. "If you can begin to put features into software, into decision-making systems, early on, they can guide designers toward more sustainable solutions by affecting them at this early stage."

Using generative design results in malleable solutions that anticipate future needs or requirements, avoiding the need to build new solutions, products, or infrastructure. "What if a building that was built for one purpose, when it needed to be turned into a different kind of building, wasn't destroyed, but it was just tweaked slightly?"

That's the real opportunity here: creating a relationship between humans and computers will be foundational to the future of design. The consequence of bringing the digital and physical together, Haley says, is that "it creates a feedback loop between what gets created in the world and what is about to be created next time."

"What is Generative Design, and How Can It Be Used in Manufacturing?" by Dan Miles, Redshift by Autodesk, November 19, 2021

"4 Ways AI in Architecture and Construction Can Empower Building Projects" by Zach Mortice, Redshift by Autodesk, April 22, 2021

Laurel Ruma: From MIT Technology Review, I'm Laurel Ruma, and this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace. Our topic today is about how to design better with artificial intelligence, everything from modular housing to skyscrapers to manufactured products and factory floors can be designed with and benefit from AI and machine learning technologies. As artificial intelligence helps humans with design options, how can it help us build smarter? Two words for you: sustainable design.

My guest is Mike Haley, the vice president of research at Autodesk. Mike leads a team of researchers, engineers, and other specialists who are exploring the future of design and making.

This episode of Business Lab is produced in association with Autodesk.

Welcome, Mike.

Mike Haley: Hi Laurel. Thanks for having me.

Laurel: So for those who don't know, Autodesk technology supports architecture, engineering, construction, product design, manufacturing, as well as media and entertainment industries. And we'll be talking about that kind of design and artificial intelligence today. But one specific aspect of it is generative design. What is generative design? And how does it lend itself to an AI-human collaboration?

Mike: So Laurel, to answer that, first you have to ask yourself: What is design? When designers are approaching a problem, they're generally looking at the problem through a number of constraints, so if you're building a building, there's a certain amount of land you have, for example. And you're also trying to improve or optimize something. So perhaps you're trying to build the building with a very low cost, or have low environmental impact, or support as many people as possible. So you've got this simultaneous problem of dealing with your constraints, and then trying to maximize these various design factors.

That is really the essence of any design problem. The history of design is that it is entirely a human problem. Humans may use tools. Those tools may be pens and pencils, they may be calculators, and they may be computers to solve that. But really, the essence of solving that problem lies purely within the human mind. Generative design is the first time we're producing technology that is using the computational capacity of the computer to assist us in that process, to help us go beyond perhaps where our usual considerations go.

As you and I'm sure most of the audience know, people talk a lot about bias in AI algorithms, but bias generally comes from the data those algorithms see, and the bias in that data generally comes from humans, so we are actually very, very biased. This shows up in design as well. The advantage of using computational assistance is you can introduce very advanced forms of AI that are not actually based on data. They're based on algorithmic or physical understandings of the world, so that when you're trying to understand that building, or design an airplane, or design a bicycle, or what it might be, it can actually use things like the laws of physics, for example, to understand the full spread of possible solutions to address that design problem I just talked about.

So in some ways, you can think of generative design as a computer technology that allows designers to expand their minds and to explore spaces and possibilities of solutions that they perhaps wouldn't go otherwise. And it might even be outside of their traditional comfort zone, so biases might prevent them from going there. One thing you find with generative design is when we watch people use this technology, they tend to use it in an iterative fashion. They will supply the problem to the computer, let the computer propose some solutions, and then they will look at those solutions and then begin to adjust their criteria and run it again. This is almost this symbiotic kind of relationship that forms between the human and the computer. And I really enjoy that because the human mind is not very good at computing. The popular idea is you can hold seven facts in your head at once, which is a lot smaller than the computer, right?

But human minds are excellent at responding and evaluating situations and bringing in a very broad set of considerations. That in fact is the essence of creativity. So if you bring that all together and look at that entire process, that is really what generative design is all about.
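To make the loop Haley describes concrete, here is a rough, self-contained sketch (purely illustrative, not Autodesk's software): propose many candidate designs, keep the ones that satisfy the stated constraints, and rank the survivors by the designer's objective so a human can review a shortlist. The design variables, formula and limits are all assumed for the example.

```python
import random

# Hypothetical design problem: choose a beam's width and depth (in cm) to carry a
# required load while minimizing material use. All formulas and limits are illustrative.
MAX_WIDTH, MAX_DEPTH = 40, 80          # constraint: fits the available envelope
REQUIRED_CAPACITY = 5000               # constraint: minimum section-modulus proxy

def propose():
    return {"width": random.uniform(5, MAX_WIDTH), "depth": random.uniform(5, MAX_DEPTH)}

def feasible(d):
    capacity = d["width"] * d["depth"] ** 2 / 6   # rectangular section modulus
    return capacity >= REQUIRED_CAPACITY

def material(d):
    return d["width"] * d["depth"]                # objective: minimize cross-section area

random.seed(0)
candidates = [propose() for _ in range(10_000)]
viable = [d for d in candidates if feasible(d)]
best = sorted(viable, key=material)[:5]           # shortlist for the human designer to review
for d in best:
    print(f"width={d['width']:.1f} cm, depth={d['depth']:.1f} cm, area={material(d):.0f} cm^2")
```

In practice the designer would inspect the shortlist, adjust the constraints or add objectives such as cost or embodied carbon, and rerun, which is the iterative, human-in-the-loop pattern described in the conversation.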

Laurel: So really what you're talking about is the relationship between a human and a computer. And the output of this relationship is something that's better than either one could do by themselves.

Mike: Yes, that's right. Exactly. I mean, humans have a set of limitations, and we have a set of skills that we bring together really when we're being creative. The same is true of a computer. The computer has certain things like computation, for example, and understanding the laws of physics and things like that. But it's far better than we are. But it's also highly limited in being able to evaluate the efficacy of a solution. So generative is really about bringing those two things together.

Laurel: So there's been a lot of discussion about the fear that AI and automation will replace workers. What is the AI-human collaboration that you're envisioning for the future of work? How can this partnership continue?

Mike: There's an incredibly interesting relationship between AI and actually not just solving problems in the world together with humans, but also improving the human condition. So when we talk about the tension between AI and human work, I really like to look at it through that lens, so that when we think of AI learning the world, learning how to do things, that can lead to something like automation. Those learnings, those digital learnings, can drive things like a robot, or a machine in a factory, or a machine in a construction site, or even just a computer algorithm that can decide on something for you.

That can be powerful if managed appropriately. Of course, you've always got the risks of bias and unfairness and those kinds of things that you have to be aware of. But there's another effect of AI learning: it is now able to also better understand what a human being is doing. So imagine an AI that watches you type in a word processor, for example. And it watches you type for many, many years. It learns things about your writing style. Now one of the obvious automation things it can do is begin to make suggestions for your writing, which is fine. We're beginning to see that today already. But something it could also do is actually begin to evaluate your writing and actually understand, maybe in a very nuanced way, how you compare to other writers. So perhaps you're writing a kind of fiction, and it's saying, "Well, generally in this realm of fiction, people that write like you are targeting these sorts of audiences. And maybe you want to consider this kind of tone, or nature of your writing."

In doing that, the AI is actually providing more tuned ways of teaching you as a human being through interpretation of your actions and working again in a really iterative way with a person to guide them to improve their own capability. So this is not about automating the problem. It's actually in some ironic way, automating the process of training a person and improving their skills. So we really like to put that lens on AI and look at that way in that, yes, we are automating a lot of tasks, but we can also use that same technology to help humans develop skills and improve their own capacity.

The other thing I will mention in this space is that many problems that we wrestle with in all forms of engineering and design are very, very complex, and we're talking about some of them right now. Those problems are beginning to reach the limits of human capacity. We have to start simplifying them in some ways. This is a place where AI and humans come together very nicely because AI can actually take certain very complex problems in the world and recast them. They can be recast or reinterpreted into language or sub problems that human beings can actually understand, that we can wrestle with and provide answers. And then the AI can take those answers back and provide a better solution to whatever problem we happen to be wrestling with at that time.

Laurel: So speaking of some of those really difficult problems, climate change, sustainability, that's certainly one of those. And you actually wrote, and here's a quote from your piece, quote, "Products need to improve in quality because an outmoded throw-away society is not acceptable in the long-term." So you're saying here that AI can help with those types of big societal problems too.

Mike: Yeah, exactly. This is exactly the kind of difficult problem that I was just talking about. For example, how many people get a new smartphone, and within a year or two, you're tossing it to get your new one? And this is becoming part of just the way we live. And we are not going to have a sustainable society until we actually learn to build products, and products can be anything from a mobile phone to a building, or large pieces of infrastructure, that survive long-term.

Now what happens in the long-term? Generally, requirements change. The power of things change. People's reaction to that, again, like I just said, is to throw them away and create something new. But what if those things were amenable to change in some ways? What if they could be partially recreated halfway through their lifespan? What if a building that was built for one purpose, when it needed to be turned into a different kind of building, wasn't destroyed, but it was just tweaked slightly? Because when the designer first designed that building, there was a way to contemplate what all future users of that building could be. What are the patterns of those? And how could that building be designed in such a way to support those future uses?

So, solving that kind of design problem, solving a problem where you're not just solving your current problem, but you're trying to solve all the future problems in some ways is a very, very difficult problem. And it was the kind of problem I was talking about earlier on. We really need a computer to help you think through that. In design terms, this is what we call a systems problem because there's multiple systems you need to think of, a system of time, a system of society, of economy, of all sorts of things around it you need to think through. And the only way to think through that is with an AI system or a computational system being your assistant through that process.

Laurel: I have to say that's a bit mind bending, to think about all the possible iterations of a building, or an aircraft carrier, or even a cell phone. But that sort of focus on sustainability certainly changes how products and skyscrapers and factory floors are designed. What else is possible with AI and machine learning with sustainability?

Mike: We tend to think normally along three axes. So one of the key issues right now we're all aware of is climate change, which is rooted in carbon. And many, many practices in the world involve the production of enormous amounts of carbon or what we call retained carbon. So if we're producing concrete, you're producing extra carbon in the atmosphere. So we could begin to design buildings, or products, or whatever it might be, that either use less carbon in the production of the materials, or in the creation of the structures themselves, or in the best case, even use things that have negative carbon.

For example, using a large amount of timber in a building can actually reduce overall carbon usage because at the lifetime that tree was growing, it consumed carbon. It embodied the carbon inside the atmosphere into itself. And now you've used it. You've trapped it essentially inside the wood, and you've placed that into the building. You didn't create new carbon as a result of producing the wood. Embodied energy is something else we think of too. In creating anything in the world, there is energy that is going to go into that. That energy might be driving a factory, but that energy could be shipping products or raw materials across the world. You really want to be able to look at the entire lifecycle of producing something and ask yourself, "How can I produce this by using the least amount of energy throughout?" And you will have a lower impact on the planet.

The final example is waste. This is a very significant area for AI to have an effect because waste in some ways is about a design that is not optimal. When you're producing waste from something, it means there are pieces you don't need. There's material you don't need. There's something coming out of this which is obviously being discarded. It is often possible to use AI to evaluate those designs in such a way to minimize that wastage, and then also produce automations, like for example, a robot saw that can cut wood for a building, or timber framing in a building, that knows the amount of wood you have. It knows where each piece is going to go. And it's cutting the wood so that it produces as few offcuts that are going to be thrown away as possible. Something like that can actually have a significant effect at the end of the day.
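As a small, hypothetical illustration of the cut-planning problem Haley mentions (not anything Autodesk ships), the sketch below packs a list of required timber lengths onto fixed-length stock boards with a first-fit-decreasing heuristic and reports how much offcut waste results. All lengths are made up for the example.

```python
# Hypothetical cut-planning sketch: assign required cut lengths (cm) to stock boards
# so that as little material as possible is left over. First-fit-decreasing heuristic.
STOCK_LENGTH = 240                      # each stock board is 240 cm (illustrative)
cuts = [120, 90, 75, 60, 60, 45, 200, 110, 30, 30]

boards = []                             # each board is a list of cuts assigned to it
for cut in sorted(cuts, reverse=True):
    for board in boards:
        if sum(board) + cut <= STOCK_LENGTH:
            board.append(cut)
            break
    else:
        boards.append([cut])            # no existing board fits; open a new one

used = len(boards) * STOCK_LENGTH
needed = sum(cuts)
print(f"Boards used: {len(boards)}")
print(f"Offcut waste: {used - needed} cm ({100 * (used - needed) / used:.1f}% of material)")
for i, board in enumerate(boards, 1):
    print(f"  board {i}: cuts {board}, leftover {STOCK_LENGTH - sum(board)} cm")
```

A production planner or an AI-driven optimizer would solve a richer version of this, accounting for kerf width, grain, offcut reuse and the specific stock on hand, but the objective of minimizing discarded material is the same.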

Laurel: You mentioned earlier that AI could help with, for example, writing, and how folks write and their styles, etc. But understanding systems and how systems work is also really important. So how could AI and ML be applied to education? And how does that affect students and teaching in general?

Mike: One of the areas that I'm very passionate about, where generative design and learning come together, is around a term that we've been playing around with for a while in all of this research, which is this idea of generative learning, which is learning for you. It's a little bit along the lines of some of the stuff we talked about before, where you're almost looking at the human as part of a loop together with the computer. The computer understands what you're trying to do. It's learning more about how you compare to others, perhaps where you could improve in your own proficiencies. And then it's guiding you in those directions. Perhaps it's giving you challenges that specifically push you on those. Perhaps it's giving you directions. Perhaps it's connecting you with others that can actually help improve you.

Like I said, we think of that as sort of a generative learning. What you're trying to optimize here is not a design, like what we talked about before, but we're trying to optimize your learning. We're trying to optimize your skillset. Also, I think underlying a lot of this as well is a shift in a paradigm. Up until fairly recently, computers were really just seen as a big calculator. Right? Certainly in design, even in our software here at Autodesk. I mean, the software was typically used to explore a design or to document a design. The software wasn't used to actually calculate every aspect of the design. It was used really in some ways as a very complex kind of drafting board, in some sense.

This is changing now with technologies like generative design, where you really are, like I talked about earlier, working in the loop with the computer. So the computer is suggesting things to you. It's pushing you as a designer. And you as a designer are also somewhat of a curator now. You're reacting to things that the computer is suggesting or providing to you. So embracing this paradigm early on in education, with the students coming into design and engineering today, is really, really important. I think that they have an opportunity to take the fields of design and engineering to entirely different levels as the result of being able to use these new capabilities effectively.

Laurel: Another place that this has to be also applied is the workplace. So employees and companies have to understand that the technology will also change the way that they work. So what tools are available to navigate our evolving workplace?

Mike: Automation can have a lot of unintended side effects in a workplace. So one of the first things any company has to do is really wrestle with that. You have to be very, very real about what's the effect on your workforce. If automation is going to be making decisions, what's the risk that those decisions might be unfair or biased in some ways? One of the things that you have to understand is that this is not just a plug it in, switch it on, and everything's going to work. You have to even involve your workforce right from the beginning in those decisions around automation. We see this in our own industry, the companies that are the most successful in adopting automation are the ones that are listening the most closely to their workforce at the same time.

It's not that they're not doing automation, but they're actually rolling it out in a way that's commensurate with the workforce, and there's a certain amount of openness in that process. I think the other aspect that I like to look at from a changing work environment is the ability to focus our time as human beings on what really matters, and not have to deal with so much tedium in our lives. So much of our time using a computer is tedious. You're trying to find the right application. You're trying to get help on something. You're trying to work around some little thing that you don't understand in the software.

Those kinds of things are beginning to fall away with AI and automation. And as they do, we've still got a fair way to go on that. But as we go further down the line on that, what it means is that creative people can spend more time being creative. They can focus on the essence of a problem. So if you're an architect who is laying out desks in an office space, you're probably not being paid to actually lay out every desk. You're being paid to design a space. So what if you design the space and the computer actually helps with the actual physical desk layout? Because that's a pretty simple thing to kind of automate. I think there's a really fundamental change in where people will be spending their time and how they'll be actually spending their time.

Laurel: And that kind of comes back to a topic we just talked about, which is AI and ethics. How do companies embrace ethics with innovation in mind when they are thinking about these artificial intelligence opportunities?

Mike: This is something that's incredibly important in all of our industries. We're seeing this rise, the awareness of this rise, obviously it's there in the popular society right now. But we've been looking at this for a while, and a couple of learnings I can give you straight off the bat is any company that's dealing with automation and AI needs to ensure that they have support for an ethical approach to this right from the very top of the company because the ethical decisions don't just sit at the technical level, they sit at all levels of decision making. They're going to be business decisions. They're going to be market decisions. They're going to be production decisions, investment decisions and technology decisions. So you have to make sure that it's understood within any corporate or industrial environment.

Next is that everybody has to be aligned internally to those organizations on: What does ethics actually mean? Ethics is a term that's used pretty broadly. But when it actually gets down to doing something about it, and understanding if you're being successful at it, it's very important to be quite precise on it. This brings me to the third point, that if you are going to announce, if you've done that, and you now have an understanding of what it is, you now need to make sure that you're solving a concrete problem because ethics can be a very, very fuzzy topic. You can do ethics washing very, very easily in an organization.

And if you don't quickly address that and actually define a very specific problem, it will continue to be fuzzy, and it will never have the effect that you would like to see within a company. And the last thing I will say is you have to make it cultural. If you are not ensuring that ethical behavior is actually part of the cultural values of your organization, you're never going to truly practice it. You can put in governance structures, you can put in software systems, you can put in all sorts of things that ensure a fairly high level of ethics. But you'll never be certain that you're really doing it unless it's embedded deeply within the culture of actually how people behave within your organization.

Laurel: So when you take all of this together, what sorts of products or applications are you seeing in early development that we can expect or even look forward to in the next, say, three to five years?

Mike: There's a number of things. The first category I like to think of is the raise-all-the-boats category, which means that we are beginning to see tools that just generally make everybody more efficient at what they do, so it's similar to what I was talking about earlier on about the architect laying out desks. It could be a car designer that is designing a new car. And in most of today's cars, there's a lot of electrical wiring. Today, the designer has to route every cable through that car and show, tell the software exactly where that cable goes. That's not actually very germane to the core design of the car, but it's a necessary evil to specify the car. That can be automated.

We're beginning to see these fairly simple automations beginning to become available to all designers, all engineers, that just allow them to be a little bit more efficient, allow them to be a little bit more precise without any extra effort, so I like to think of that as the raise-all-the-boats kind of feature. The next thing, which we touched on earlier in the session, was the sustainability of solutions. It turns out that most of the key decisions that affect the sustainability of a product, or a building, or really anything, happen in the earliest stages of the design. They really happen in this very sort of conceptual phase when you're imagining what you're going to create. So if you can begin to put features into software, into decision-making systems early on, they can guide designers towards more sustainable solutions through affecting them at this early stage. That's the next thing I think we're going to see.

The other thing I'm seeing appears quite a lot already, and this is not just true in AI, but it's just generally true in the digital space, is the emergence of platforms and very flexible tools that shape to the needs of the users themselves. When I was first using a lot of software, as I'm sure many of us remember, you had one product. It always did a very specific thing, and it was the same for whoever used it. That era is ending, and we're ending up seeing tools now that are highly customizable, perhaps they're even automatically reconfiguring themselves as they understand more about what you need from them. If they understand more about what your job truly is, they will adjust to that. So I think that's the other thing we're seeing.

The final thing I'll mention is that over the next three to five years, we're going to see more about the breaking down of the barrier between digital and physical. Artificial intelligence has the ability to interpret the world around us. It can use sensors. Perhaps it's microphones, perhaps it's cameras, or perhaps it's more complicated sensors like strain sensors inside concrete, or stress sensors on a bridge, or even understanding the ways humans are behaving in a space. AI can actually use all of those sensors to start interpreting them and create an understanding, a more nuanced understanding of what's going on in that environment. This was very difficult, even 10 years ago. It was very, very difficult to create computer algorithms that could do those sorts of things.

So if you take for example something like human behavior, we can actually start creating buildings where the buildings actually understand how humans behave in that building. They can understand how they change the air conditioning during the day, and the temperature of the building. How do people feel inside the building? Where do people congregate? How does it flow? What is the timing of usage of that building? If you can begin to understand all of that and actually pull it together, it means the next building you create, or even improvements to the current building can be better because the system now understands more about: How is that building actually being used? There's a digital understanding of this.

This is not just limited to buildings, of course. This could be literally any product out there. And this is the consequence of bringing the digital and physical together, is that it creates this feedback loop between what gets created in the world and what is about to be created next time. And the digital understanding of that can constantly improve those outcomes.

Laurel: That's an amazing outlook. Mike, thank you so much for joining us today on what's been a fantastic conversation on The Business Lab.

Mike: You're very welcome, Laurel. It was super fun. Thank you.

Laurel: That was Mike Haley, vice president of research at Autodesk, who I spoke with from Cambridge, Massachusetts, the home of MIT and MIT Technology Review, overlooking the Charles River. That's it for this episode of Business Lab. I'm your host, Laurel Ruma. I'm the director of insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology. And you can find us in print, on the web, and at events each year around the world. For more information about us and the show, please check out our website at technologyreview.com.

This show is available wherever you get your podcasts. If you enjoyed this episode, we hope you'll take a moment to rate and review us. Business Lab is a production of MIT Technology Review. This episode was produced by Collective Next. Thanks for listening.

This podcast episode was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review's editorial staff.

Harnessing data analytics and AI-powered decision-making for supply chain resiliency – Automotive News

Posted: at 10:34 am

Traditionally, the most difficult part of mapping a supply chain is identifying all the different silos and sidings where pertinent information may be stored.

"With automotive, you could spend your entire life trying to model the entire supply chain and all the global implications," says Jafaar Beydoun, sales director at software firm o9 Solutions. "For something like EV batteries, you could get all the way to lithium mining." It's possible to model all those relationships in time, he says, but it's better for companies to focus first on their most vital partners and suppliers.

"One of our clients was using our software for supply-and-demand predictions, but they realized that without getting the suppliers directly involved, their information was always outdated," Beydoun says. "To get real-time information in one place, they figured out the most critical and highest-spend suppliers, integrated the ones they worked best with first, then adopted suppliers farther down the list later on."

Mapping this way is iterative: after perfecting the onboarding and data exchange processes with the first set of 15 suppliers, it's easier to extend them to the next group of 10 or 15, all the way down to the smallest suppliers. As an example, Beydoun cites a small supplier in Bangladesh that did not have a reliable internet connection, but which could upload a single weekly data sheet to inform forecasting models.

"This OEM reached almost total supply-chain visibility in two years with this phased approach, about half the time it would have taken if they'd tried to implement all vendors at once," he says.

Both o9 Solutions and Palantir use Amazon Web Services (AWS) as their cloud-computing platform, which allows them to spin up computing power as needed. But AWS's expertise comes into play in other ways, too.

"Many organizations are reconfiguring their IT needs around the platforms that Amazon Web Services provides," says Manish Govil, the company's supply chain global segment leader. "We have the ability to gather and organize data from disparate systems, such as point-of-sale, ERP and Internet of Things (IoT) devices. AWS's data ingestion, transmission and storage pipeline provides the capability to stitch together data from those disparate systems for end-to-end visibility."

The company also has plenty of expertise managing its own supply chain, with many partners who have highly specialized capabilities in different areas of the supply chain. "We understand demand-shaping, sensing and planning, transportation management, real-time transportation visibility and warehouse management," Govil says. "There are a lot of organizations that can provide one or two of these areas of expertise, but we have a very extensive ecosystem that brings them all together."

Along with Apple, Procter & Gamble, McDonald's, and Unilever, Amazon is one of five companies deemed a supply-chain master by consulting firm Gartner, best known for its annual list of the top 25 supply-chain management companies. That experience informs how AWS helps other companies build similar types of systems, Govil says: "There are already business networks that have built the connective tissue for supply-chain visibility through AWS."

While the data is owned by the individual companies, this connectivity allows very specific data to be shared from many disparate players far faster than any conventional data-gathering process, and speed is of the essence.

Aidoc partners with Novant Health, providing imaging AI to expedite treatment for patients in the emergency department – Yahoo Finance

Posted: at 10:34 am

Novant Health's integration of Aidoc's AI solutions, amid the latest wave of the Omicron variant, expands upon its existing slate of innovative technologies designed to improve delivery of patient care and outcomes

NEW YORK, Jan. 24, 2022 /PRNewswire/ -- Aidoc, the leading provider of enterprise-grade AI solutions for medical imaging, announces a partnership with Novant Health, a health network of over 1,800 physicians with 15 medical centers across three states. By incorporating Aidoc's AI platform, which includes seven FDA-cleared solutions for triage and notification of patients with acute medical conditions, Novant Health is taking proactive steps to improve patient outcomes and reduce emergency department (ED) length of stay amid resource constraints inflicted by the Omicron variant.

With a dedication to digital transformation for improving workflow efficiencies and patient outcomes, Novant Health is one of the first health networks in North Carolina to adopt Aidoc's AI platform. Novant Health has integrated multiple technologies and has been recognized by the College of Healthcare Information Management Executives' (CHIME) "Digital Health Most Wired" program five years in a row for effectively applying "core and advanced technologies into their clinical and business programs to improve health and care in their communities."

"When diagnosing and treating critical pathologies like pulmonary emboli and hemorrhagic strokes, every second counts," said Dr. Eric Eskioglu, Executive Vice President Chief Medical and Scientific Officer, Novant Health. "We are thrilled to partner with Aidoc to bring yet another leading-edge AI-technology to Novant Health. For years, we've been committed to harnessing innovative technologies to improve patient safety and outcomes through the Novant Health Institute of Innovation and Artificial Intelligence. With Aidoc's technology, our physicians will be able to more quickly identify and prioritize these patients and provide rapid life-saving treatments."

From Aidoc's AI platform, Novant Health will be utilizing the intracranial hemorrhage (brain bleeds), pulmonary embolism (lung blood clots), incidental pulmonary embolism, c-spine fracture, and abdominal free air AI solutions. In one example, a study conducted by the Yale New Haven Health System found that Aidoc's intracranial hemorrhage AI solution was able to reduce ED length of stay by approximately one hour.

"With rapidly rising numbers of people infected with the highly contagious Omicron variant, we can see the hard impact on hospital emergency room capacities and resources across the U.S.," says Elad Walach, CEO and co-founder of Aidoc. "We're proud to partner with a leading, innovative hospital network like Novant Health, which serves a large portion of the population in the three states its facilities are located in. Together, through our AI solutions and their state-of-the-art facilities, we will enable radiologists and related hospital providers to expedite care for tens of thousands of patients, contributing toward a mitigation of the current emergency room situations and setting an example for integrating innovation during turbulent and non-turbulent periods."

About Aidoc

Aidoc delivers the most comprehensive and widely-used portfolio of AI solutions, supporting providers by flagging patients with suspected acute conditions in real-time, expediting patient treatment and improving quality of care. Aidoc's healthcare AI platform is currently used by thousands of physicians in hospitals and radiology groups worldwide and across multiple care coordination service lines, having analyzed over 10.3 million scans in the past year. For more information, visit http://www.aidoc.com.

About Novant Health

Novant Health is an integrated network of physician clinics, outpatient facilities and hospitals that delivers a seamless and convenient healthcare experience to communities in North Carolina, South Carolina, and Georgia. The Novant Health network consists of more than 1,800 physicians and over 35,000 employees who provide care at nearly 800 locations, including 15 hospitals and hundreds of outpatient facilities and physician clinics. In 2021, Novant Health was the highest-ranking healthcare system in North Carolina to be included on Forbes' Best Employers for Diversity list. Diversity MBA Magazine ranked Novant Health first in the nation on its 2021 list of "Best Places for Women & Diverse Managers to Work." In 2020, Novant Health provided more than $1.02 billion in community benefit, including financial assistance and services.

For more information, please visit our website at NovantHealth.org. You can also follow us on Twitter and Facebook.

Ariella Shoham, VP Marketing, ariella@aidoc.com

View original content: https://www.prnewswire.com/news-releases/aidoc-partners-with-novant-health-providing-imaging-ai-to-expedite-treatment-for-patients-in-the-emergency-department-301466537.html

SOURCE Aidoc

Hardware accelerators in the world of Perception AI – Analytics India Magazine

Posted: at 10:34 am

"Perception systems can be defined as a machine or edge device with embedded advanced intelligence, which can perceive its surroundings, take meaningful abstractions out of them, and take some decisions in real time," said Pradeep Sukumaran, VP, AI & Cloud at Ignitarium, at the Machine Learning Developers Summit (MLDS) in his talk titled "Hardware Accelerators in the World of Perception AI."

The key components of perception AI systems include sensors such as cameras, LiDAR, radar, and microphones.

Pradeep says, "Looking at the cost and power parameters, and now with the advent of Deep Learning, which is a subset of ML, and the availability of some very interesting hardware options, I think this has opened up the use of Deep Learning. In some cases it completely replaces traditional signal processing algorithms, going way beyond what was done earlier in terms of the amount of data it can process, and in other cases there is a combination of traditional signal processing with Deep Learning."

Perception AI: Use cases

Automotive and Robotics

In the trucking industry, sensors guide a truck from source to destination on dedicated lanes. There are also lower-end use cases like service or delivery robotics, where the robots use sensors to understand their surroundings and find their way around.

Predictive maintenance

Companies use vibration sensors attached to motors to understand specific signatures. These analyses are typically done using ESP pattern recognition, but they are now being replaced by ML and Deep Learning models that can run on low-power hardware.

Surveillance

In multimodal use cases, surveillance is done with a combination of Deep Learning and specialised hardware. Multiple sensors with audio and video are now combined to gather information from the surroundings. 2D cameras paired with 3D LiDARs can be used at traffic junctions to monitor vehicle and pedestrian movement. Sometimes the 2D cameras miss many images because excessive light, rain or other environmental conditions obstruct standard cameras; 3D LiDAR can still detect objects in such conditions, and a combination of the two yields the traffic patterns needed for a more intelligent traffic management system.

Medical equipment

The medical field is also using Deep Learning and FPGAs, specifically for smart surgery, smart surgical equipment, and similar applications.

Edge AI

General-purpose hardware like the CPU, DSP, and GPU is strapped to a DNN engine.

Deep Learning models require specific hardware to run efficiently; these are called DNN engines. Vendors are strapping these engines to the CPUs, DSPs, and GPUs, basically allowing the CPUs to offload some of the work to engines that are tightly coupled on the same chip. General-purpose hardware is now getting variants tuned for AI.
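As a loose, language-level illustration of that offloading idea (hypothetical names, no vendor SDK), a tiny runtime can walk a model layer by layer and dispatch the matrix-heavy operations to an attached DNN engine while keeping lightweight operations on the CPU:

```python
import numpy as np

# Illustrative only: a toy "runtime" that decides, per layer, whether to dispatch
# work to an attached DNN engine or keep it on the CPU. Real edge-AI SDKs expose
# this partitioning through their own compilers or runtime delegates.

def dnn_engine_matmul(x, w):
    # Stand-in for an accelerator call; here it just uses NumPy on the CPU.
    return x @ w

def cpu_relu(x):
    return np.maximum(x, 0.0)

layers = [
    {"kind": "matmul", "weights": np.random.randn(64, 128)},
    {"kind": "relu"},
    {"kind": "matmul", "weights": np.random.randn(128, 10)},
]

def run(x, layers):
    for layer in layers:
        if layer["kind"] == "matmul":       # heavy op: offload to the DNN engine
            x = dnn_engine_matmul(x, layer["weights"])
        else:                               # light op: stay on the CPU
            x = cpu_relu(x)
    return x

out = run(np.random.randn(1, 64), layers)
print(out.shape)  # (1, 10)
```

Real toolchains make this partitioning decision based on which operators the accelerator supports and on the cost of moving data between the CPU and the engine.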

FPGAs are programmable devices, and the companies providing FPGAs want to enable AI in their key applications across industries. They aim for high performance with low power: you can write the code, burn it into the FPGA, and redesign it in the field. The trade-off is a lack of software-developer friendliness, since developers have to work at the hardware level to implement neural nets. Companies are building tools and SDKs that make this easier, but there is still a long way to go.

ASICs are application-specific integrated circuits designed specifically for AI workloads.

See original here:

Hardware accelerators in the world of Perception AI - Analytics India Magazine

Posted in Ai | Comments Off on Hardware accelerators in the world of Perception AI – Analytics India Magazine

UB-educated data scientist urges more women to work in AI – UB Now: News and views for UB faculty and staff – University at Buffalo Reporter

Posted: at 10:34 am

Data scientist Darshana Govind believes that STEM and data science, especially artificial intelligence, are great fields for female researchers.

"It's challenging because you don't see a lot of women in the field," says Govind, who recently earned her doctorate in computational cell biology, anatomy and pathology from the Jacobs School of Medicine and Biomedical Sciences at UB. "I'd like to see more women join STEM (science, technology, engineering and math) and data science. It's a great field to be in. It's hard to be one female in a room full of men, so I encourage more women to join AI teams."

"Realizing the potential of AI to make a difference in people's lives by transforming health care is what really drew me to the field," she explains. "Plus, it's exciting to be a part of groundbreaking research, especially when you're surrounded by brilliant researchers from whom you get to learn every day. I've been able to learn a lot of new science and engineering by being part of a multidisciplinary field with a rapid pace of development."

While at UB, Govind conducted her research in the lab of her mentor, Pinaki Sarder, associate professor of pathology and anatomical sciences. Sarder is a big supporter of her work.

"One of my goals at UB is not only to do research, but also to develop a workforce, and that's very important," Sarder says. "Darshana has done excellent, very difficult work for her PhD and has been published in a top journal." He notes that while the situation is improving, there still aren't many women working in artificial intelligence right now.

Govind, who now works as a data scientist at Janssen Pharmaceuticals, a division of Johnson & Johnson, is a strong proponent of women in data science and AI.

"Data science and AI have enabled us to leverage petabytes of data to extract meaningful information in a variety of fields. In health care, we are now able to mine volumes of medical data to optimize patient diagnosis and treatment response. It's a game-changer, and we need more data scientists," says Govind, whose doctoral degree is to be conferred next month. "Unfortunately, there is currently a major gender gap in this field, with less than one-third of data scientists being women. It's important to have women play an equal role in this industry and incorporate our voices and perspectives while developing major impactful technologies."

"Additionally, this field is fueled by creativity and innovation, and we need as many diverse minds as possible to come up with novel solutions to critical problems," she adds.

Allison Brashear, vice president for health sciences and dean of the Jacobs School, notes that it's no secret that men tend to outnumber women majoring in STEM fields in college. "Part of the problem is that gender stereotypes and a shortage of diverse role models perpetuate gender STEM gaps," Brashear says. "In higher education, it's of utmost importance that we increase opportunities in STEM for women. Although some progress has been made in recruiting women to some fields, like biological sciences and computer science, we still have a long way to go toward narrowing the gender pay gap in STEM careers and ensuring a more diverse body of STEM researchers in higher education."

"I commend Dr. Govind for actively encouraging more women to enter STEM fields. Now more than ever, women at the start of their educational journeys need support and access to fields where they are underrepresented," she adds.

Govind says notable women like Joy Buolamwini, whose TED Talk on algorithmic bias has more than 1 million views, and Fei-Fei Li, co-director of Stanford University's Human-Centered AI Institute, are at the forefront of AI and have played a major role in encouraging more inclusion and diversity in the field. In addition, organizations like Women in Data Science and Women in AI have enabled the formation of large communities that support women and minorities in the field.

"That being said, we are still vastly underrepresented in this field," Govind says, "and I believe all of us have a role to play in encouraging and empowering women to close this gender gap."

See the rest here:

UB-educated data scientist urges more women to work in AI - UB Now: News and views for UB faculty and staff - University at Buffalo Reporter

Posted in Ai | Comments Off on UB-educated data scientist urges more women to work in AI – UB Now: News and views for UB faculty and staff – University at Buffalo Reporter

AI-driven fintech MDOTM bets on game theory – City A.M.

Posted: at 10:34 am

Monday 24 January 2022 9:00 am

MDOTM, a London-based fintech providing AI-driven investment strategies, has acquired the team of Mercurius Betting Intelligence, a company specialized in betting models and sports prediction with artificial intelligence.

Following the €6.2m (£5.1m) Series B round raised in September 2021, which brought MDOTM's total funding to €8.2m, the agreement will incorporate Mercurius' skills in AI applied to game theory, rare-event forecasting and tail-risk management into MDOTM's expertise. The acquisition also makes the firm one of Europe and the UK's largest investment-focused AI teams, with more than 50 data scientists and finance experts split between London and Milan.

Founded by a group of mathematicians in 2017, Mercurius Betting Intelligence specialises in AI technology to control, trade and capitalise on sports betting markets.

The company uses proprietary deep learning algorithms to systematically analyse millions of data points per match, in order to find and exploit inefficient odds in the betting markets.
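
The idea behind "inefficient odds" can be shown with a small sketch: compare a model's estimated probability of an outcome with the probability implied by the bookmaker's decimal price, and the gap translates into expected value. The numbers below are illustrative placeholders, not Mercurius' actual models or data.

```python
# Sketch: spotting a positive-expected-value price from a model probability.
def implied_probability(decimal_odds: float) -> float:
    """Probability the bookmaker's price implies (ignoring the bookmaker margin)."""
    return 1.0 / decimal_odds

def expected_value(model_prob: float, decimal_odds: float, stake: float = 1.0) -> float:
    """Expected profit per unit stake if the model's probability is correct."""
    return model_prob * (decimal_odds - 1.0) * stake - (1.0 - model_prob) * stake

odds = 2.40         # hypothetical price offered on a home win
model_prob = 0.48   # hypothetical model estimate of a home win

print(f"implied probability: {implied_probability(odds):.1%}")
print(f"expected value per unit stake: {expected_value(model_prob, odds):+.3f}")
# A positive expected value suggests the price underestimates the outcome,
# i.e. an inefficiency the strategy could exploit.
```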

"The unique know-how brought by Mercurius will keep bringing our clients the best technology for investment decision-making," Tommaso Migliore, CEO and founder of MDOTM, said.

Read more:

AI-driven fintech MDOTM bets on game theory - City A.M.

Posted in Ai | Comments Off on AI-driven fintech MDOTM bets on game theory – City A.M.

IBM sells off Watson AI healthcare unit – Verdict

Posted: at 10:34 am

IBM is to sell off the data assets of its AI-powered Watson Health operation to private equity firm Francisco Partners, in a move that all but ends the story of the beleaguered healthcare unit.

The deal was announced on Friday; exact terms were not disclosed, but media reports suggested a sale price of more than $1bn.

The sell-off marks an end to IBM's healthcare ambitions as the Armonk, NY-based giant concentrates on its hybrid cloud business. Where reports earlier in the month had suggested IBM was looking to sell off the entirety of the poorly performing unit, last week's deal covers only Watson Health's data and analytics assets.

IBM had previously built out Watson Health through a series of acquisitions totalling more than $4bn. The division facilitated medical research and solution-making with a proprietary tool powered by artificial intelligence (AI) and the Watson supercomputer.

"Today's agreement with Francisco Partners is a clear next step as IBM becomes even more focused on our platform-based hybrid cloud and AI strategy," said Tom Rosamilia, Senior Vice President, IBM Software, in a statement.

"IBM remains committed to Watson, our broader AI business, and to the clients and partners we support in healthcare IT."

Trouble first hit Watson Health when a 2013 venture with MD Anderson Cancer Center to eradicate cancer promised more than it delivered. The project closed in 2017 after a total spend of $62m.

Other healthcare providers pulled away from the service, suggesting a lack of efficacy. In 2018, Watson Health made an undisclosed number of layoffs, and its chief stepped down the same year.

As of 2021, the unit recorded $1bn in annual revenue and no profit.

The move echoes McDonald's recent AI sell-off following disappointing results, and rumours of an AI winter (a period of diminished interest and investment in AI, as has happened more than once already) have been circulating in recent times. But it's unlikely Big Blue will move completely away from Watson or AI in the immediate future.

IBM, which has been pivoting away from healthcare ventures since the appointment of new CEO Arvind Krishna, is instead refocusing its AI tools on other sectors.

The company's latest AI products show a pivot towards customer engagement, worker productivity and technician aids.

AI in healthcare, meanwhile, remains a lucrative field: Microsoft acquired Nuance Communications for $19.7bn last April. The company's primary offering is an AI tool that transcribes doctors' notes, along with customer service calls and voicemails to healthcare providers.

GlobalData forecasts in a recent report on AI in healthcare that the market for AI platforms for the entire health industry will reach $4.3bn by 2024.

"IBM Watson Health should be a cautionary tale for all AI vendors," says GlobalData research director Ed Thomas. "It demonstrates the potential consequences of overpromising and underdelivering."

The irony is that IBM is getting out of healthcare technology just when others are making big bets on the sector. Oracle bought Cerner for $28 billion in December, while healthcare expertise is a significant factor behind Microsoft's $20 billion move for Nuance.

Link:

IBM sells off Watson AI healthcare unit - Verdict

Posted in Ai | Comments Off on IBM sells off Watson AI healthcare unit – Verdict

Top AI and Data Science Hackathons to Apply for in 2022 – Analytics Insight

Posted: at 10:34 am

If you wish to sharpen your skills in AI and data science, you should apply for these hackathons in 2022

With the growing popularity of data science and AI, more people are interested in learning about the sector, and there is no better way to learn than through practice and continually updating one's skills. Hackathons are events where people from all over can come together in competition, sharpen their skills, and learn from their competitors. Here are the top AI and data science hackathons to apply for in 2022.

HackerEarth is a good place for beginners. Programmers from all over the world come together there to solve problems across a wide range of computer science domains, such as algorithms, machine learning, and artificial intelligence, and to practice different programming paradigms like functional programming.

MachineHack is an online platform for machine learning competitions. It is a growing platform with a mission to support the ever-growing data science community and help young aspirants learn and improve their skills in the field of analytics. It hosts tough business problems whose solutions call for machine learning and data science.

Data science hackathons on DataHack enable you to compete with leading data scientists and machine learning experts in the world. This is your chance to work on real-life data science problems, improve your skillset, learn from expert data science and machine learning professionals, and hack your way to the top of the hackathon leaderboard! You also stand a chance to win prizes and get a job at your dream data science company.

The WiDS Datathon aims to inspire women worldwide to learn more about data science, and to create a supportive environment for women to connect with others in their community who share their interests. This year's WiDS Datathon, organized by the WiDS Worldwide team, Harvard University IACS, and the WiDS Datathon Committee, will tackle a key way to mitigate the effects of climate change, with a focus on energy efficiency. The WiDS Datathon Committee is partnering with experts from many disciplines at Climate Change AI, Lawrence Berkeley National Laboratory (Berkeley Lab), US Environmental Protection Agency (EPA), and MIT Critical Data. WiDS Datathon participants will analyze regional differences in building energy efficiency, creating models to predict building energy consumption.
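
As a purely hypothetical starting point for that task, the sketch below regresses a building's energy use on a few invented site features with synthetic data; the actual competition dataset, features, and evaluation metric will differ.

```python
# Sketch: a baseline regression for building energy consumption on synthetic data.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 500
# Invented features: floor area (m^2), building age (years), heating degree days.
X = np.column_stack([
    rng.uniform(200, 5000, n),
    rng.uniform(0, 80, n),
    rng.uniform(1000, 6000, n),
])
# Synthetic target: energy use grows with area, age, and climate load, plus noise.
y = 0.05 * X[:, 0] + 0.8 * X[:, 1] + 0.02 * X[:, 2] + rng.normal(0, 10, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
rmse = np.sqrt(mean_squared_error(y_te, model.predict(X_te)))
print(f"validation RMSE on synthetic data: {rmse:.1f}")
```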

For this year's Helmholtz GPU Hackathon, the organizers have partnered with the Helmholtz Information & Data Science Academy and the AI Campus in Berlin, as well as NVIDIA and OpenACC.org, to provide a unique opportunity for all scientists and industry partners to accelerate their AI research and/or HPC codes. All teams will receive expert mentorship from academia and industry leaders and work in a collaborative environment. The Helmholtz GPU Hackathon is a multi-day event designed to help teams of three to six developers accelerate their own codes on GPUs using a programming model or machine learning framework of their choice. Each team is assigned mentors for the duration of the event.

Smart India Hackathon 2022 is a nationwide initiative to give students a platform to solve some of the pressing problems we face in our daily lives, and thus inculcate a culture of product innovation and a mindset of problem-solving. The themes of this hackathon range from smart automation to heritage and culture, fitness, and many more.

This AI-based hackathon is centered in Canada. The competition is sprint-styled and aims to enable students from all backgrounds to come together and build something unique. It is presented by the McMaster Artificial Intelligence Society and will take place from January 21 to 23.


Read more:

Top AI and Data Science Hackathons to Apply for in 2022 - Analytics Insight

Posted in Ai | Comments Off on Top AI and Data Science Hackathons to Apply for in 2022 – Analytics Insight
