Daily Archives: March 17, 2022

There’s more to AI Bias than biased data, NIST report highlights – YubaNet

Posted: March 17, 2022 at 3:08 am

As a step toward improving our ability to identify and manage the harmful effects of bias in artificial intelligence (AI) systems, researchers at the National Institute of Standards and Technology (NIST) recommend widening the search for the sources of these biases beyond the machine learning processes and data used to train AI software, to the broader societal factors that influence how technology is developed.

The recommendation is a core message of a revised NIST publication, Towards a Standard for Identifying and Managing Bias in Artificial Intelligence (NIST Special Publication 1270), which reflects public comments the agency received on its draft version released last summer. As part of a larger effort to support the development of trustworthy and responsible AI, the document offers guidance connected to the AI Risk Management Framework that NIST is developing.

According to NIST's Reva Schwartz, the main distinction between the draft and final versions of the publication is the new emphasis on how bias manifests itself not only in AI algorithms and the data used to train them, but also in the societal context in which AI systems are used.

"Context is everything," said Schwartz, principal investigator for AI bias and one of the report's authors. "AI systems do not operate in isolation. They help people make decisions that directly affect other people's lives. If we are to develop trustworthy AI systems, we need to consider all the factors that can chip away at the public's trust in AI. Many of these factors go beyond the technology itself to the impacts of the technology, and the comments we received from a wide range of people and organizations emphasized this point."

Bias in AI can harm humans. AI can make decisions that affect whether a person is admitted into a school, authorized for a bank loan or accepted as a rental applicant. It is relatively common knowledge that AI systems can exhibit biases that stem from their programming and data sources; for example, machine learning software could be trained on a dataset that underrepresents a particular gender or ethnic group. The revised NIST publication acknowledges that while these computational and statistical sources of bias remain highly important, they do not represent the full picture.

A more complete understanding of bias must take into account human and systemic biases, which figure significantly in the new version. Systemic biases result from institutions operating in ways that disadvantage certain social groups, such as discriminating against individuals based on their race. Human biases can relate to how people use data to fill in missing information, such as a person's neighborhood of residence influencing how likely authorities would consider the person to be a crime suspect. When human, systemic and computational biases combine, they can form a pernicious mixture, especially when explicit guidance is lacking for addressing the risks associated with using AI systems.

"If we are to develop trustworthy AI systems, we need to consider all the factors that can chip away at the public's trust in AI. Many of these factors go beyond the technology itself to the impacts of the technology." (Reva Schwartz, principal investigator for AI bias)

To address these issues, the NIST authors make the case for a socio-technical approach to mitigating bias in AI. This approach involves a recognition that AI operates in a larger social context and that purely technically based efforts to solve the problem of bias will come up short.

"Organizations often default to overly technical solutions for AI bias issues," Schwartz said. "But these approaches do not adequately capture the societal impact of AI systems. The expansion of AI into many aspects of public life requires extending our view to consider AI within the larger social system in which it operates."

Socio-technical approaches in AI are an emerging area, Schwartz said, and identifying measurement techniques to take these factors into consideration will require a broad set of disciplines and stakeholders.

"It's important to bring in experts from various fields, not just engineering, and to listen to other organizations and communities about the impact of AI," she said.

NIST is planning a series of public workshops over the next few months aimed at drafting a technical report for addressing AI bias and connecting the report with the AI Risk Management Framework. For more information and to register, visit the AI RMF workshop page.


Run:ai Seeks to Grow AI Virtualization with $75M Round – Datanami


Run:ai, a provider of an AI virtualization layer that helps optimize GPU instances, yesterday announced a Series C round worth $75 million. The funding figures to help the fast-growing company expand its sales reach and further develop its platform.

GPUs are the beating heart of deep learning today, but the limited nature of the computing resource means AI teams are constantly battling to squeeze the most work out of them. That's where Run:ai steps in with its flagship product, dubbed Atlas, which provides a way for AI teams to get more bang for their GPU buck.

"We do for AI hardware what VMware and virtualization did for traditional computing: more efficiency, simpler management, greater user productivity," Ronen Dar, Run:ai's CTO and co-founder, says in a press release. "Traditional CPU computing has a rich software stack with many development tools for running applications at scale. AI, however, runs on dedicated hardware accelerators such as GPUs, which have few tools to help with their implementation and scaling."

Atlas abstracts AI workloads away from GPUs by creating virtual pools where GPU resources can be automatically and dynamically allocated, thereby gaining more efficiency from GPU investments, the company says.

The platform also brings queuing and prioritization methods to deep learning workloads running on GPUs, and applies fairness algorithms to ensure users have an equal chance at getting access to the hardware. The company's software also enables clusters of GPUs to be managed as a single unit, and allows a single GPU to be broken up into fractional GPUs for finer-grained allocation.
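The fair-share pooling idea described above can be sketched in a few lines. The sketch below is a toy illustration of fractional GPU allocation with equal per-user shares; the names (GpuPool, Job) and the scheduling policy itself are hypothetical simplifications, not Run:ai's actual algorithm.

```python
# Toy sketch of fractional GPU pooling with fair-share queuing.
# Illustrative only; not Run:ai's implementation.
from dataclasses import dataclass

@dataclass
class Job:
    user: str
    demand: float  # fraction of one GPU requested, e.g. 0.5

class GpuPool:
    def __init__(self, num_gpus: int):
        self.capacity = float(num_gpus)  # total GPU capacity in the pool
        self.allocations: list[tuple[Job, float]] = []

    def schedule(self, jobs: list[Job]) -> None:
        """Grant each user an equal share of the pool, capped at their demand."""
        users = {j.user for j in jobs}
        fair_share = self.capacity / len(users)  # equal share per user
        used = {u: 0.0 for u in users}
        for job in jobs:
            grant = min(job.demand, fair_share - used[job.user])
            if grant > 0:
                self.allocations.append((job, grant))
                used[job.user] += grant

pool = GpuPool(num_gpus=2)
jobs = [Job("alice", 1.5), Job("bob", 0.25), Job("bob", 0.25)]
pool.schedule(jobs)
# alice is capped at her 1.0 fair share; bob's two small jobs fit within his.
```

A real scheduler would also preempt, backfill idle capacity, and reclaim shares when users leave the queue; the point here is only that fractional grants let several jobs coexist on hardware that would otherwise be allocated whole-GPU at a time.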

Atlas functions as a plug-in to Kubernetes, the open source container orchestration system. Data scientists can get access to Atlas via integrations with IDE tools like Jupyter Notebook and PyCharm, the company says.

The abstraction brings greater efficiency to data science teams who are experimenting with different techniques and trying to find what works. According to a December 2020 Run:ai whitepaper, one customer was able to reduce their AI training time from 46 days to about 36 hours, which represents a roughly 3,000% improvement.
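As a quick sanity check, the 3,000% figure is consistent with the two numbers quoted in the whitepaper claim:

```python
# Arithmetic check of the quoted speedup: 46 days down to about 36 hours.
before_hours = 46 * 24   # 1104 hours
after_hours = 36
speedup = before_hours / after_hours          # ~30.7x faster
percent_improvement = (speedup - 1) * 100     # ~2,967%, i.e. roughly 3,000%
print(round(speedup, 1), round(percent_improvement))  # prints: 30.7 2967
```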

"With Run:ai Atlas, we've built a cloud-native software layer that abstracts AI hardware away from data scientists and ML engineers, letting Ops and IT simplify the delivery of compute resources for any AI workload and any AI project," Dar continues.

The Tel Aviv company, which was founded in 2018, has experienced a 9x increase in annual recurring revenue (ARR) over the past 12 months, during which time the company's employee count has tripled. The company has also quadrupled its customer base over the past two years. The Series C round, which brings the company's total funding to $118 million, will be used to grow sales and enhance its core platform.

"When we founded Run:ai, our vision was to build the de facto foundational layer for running any AI workload," says Omri Geller, Run:ai CEO and co-founder, in the press release. "Our growth has been phenomenal, and this investment is a vote of confidence in our path. Run:ai is enabling organizations to orchestrate all stages of their AI work at scale, so companies can begin their AI journey and innovate faster."

Run:ai's platform and growth caught the eye of Tiger Global Management, which co-led the Series C round with Insight Partners, which led the Series B round. Other firms participating in the current round included existing investors TLV Partners and S Capital VC.

"Run:ai is well positioned to help companies reimagine themselves using AI," says Insight Partners Managing Director Lonne Jaffe, who you might remember was the CEO of Syncsort (now Precisely) nearly a decade ago.

"As the Forrester Wave AI Infrastructure report recently highlighted, Run:ai creates extraordinary value by bringing advanced virtualization and orchestration capabilities to AI chipsets, making training and inference systems run both much faster and more cost-effectively," Jaffe says in the press release.

In addition to AI workloads, Run:ai can also be used to optimize HPC workloads.



When it comes to AI, can we ditch the datasets? – MIT News


Huge amounts of data are needed to train machine-learning models to perform image classification tasks, such as identifying damage in satellite photos following a natural disaster. However, these data are not always easy to come by. Datasets may cost millions of dollars to generate, if usable data exist in the first place, and even the best datasets often contain biases that negatively impact a model's performance.

To circumvent some of the problems presented by datasets, MIT researchers developed a training method that, rather than using a dataset, uses a special type of machine-learning model to generate extremely realistic synthetic data, which can then train another model for downstream vision tasks.

Their results show that a contrastive representation learning model trained using only these synthetic data is able to learn visual representations that rival or even outperform those learned from real data.

This special machine-learning model, known as a generative model, requires far less memory to store or share than a dataset. Using synthetic data also has the potential to sidestep some concerns around privacy and usage rights that limit how some real data can be distributed. A generative model could also be edited to remove certain attributes, like race or gender, which could address some biases that exist in traditional datasets.

"We knew that this method should eventually work; we just needed to wait for these generative models to get better and better. But we were especially pleased when we showed that this method sometimes does even better than the real thing," says Ali Jahanian, a research scientist in the Computer Science and Artificial Intelligence Laboratory (CSAIL) and lead author of the paper.

Jahanian wrote the paper with CSAIL grad students Xavier Puig and Yonglong Tian, and senior author Phillip Isola, an assistant professor in the Department of Electrical Engineering and Computer Science. The research will be presented at the International Conference on Learning Representations.

Generating synthetic data

Once a generative model has been trained on real data, it can generate synthetic data that are so realistic they are nearly indistinguishable from the real thing. The training process involves showing the generative model millions of images that contain objects in a particular class (like cars or cats), and then it learns what a car or cat looks like so it can generate similar objects.

Essentially by flipping a switch, researchers can use a pretrained generative model to output a steady stream of unique, realistic images that are based on those in the model's training dataset, Jahanian says.

But generative models are even more useful because they learn how to transform the underlying data on which they are trained, he says. If the model is trained on images of cars, it can imagine how a car would look in different situations, ones it did not see during training, and then output images that show the car in unique poses, colors, or sizes.

Having multiple views of the same image is important for a technique called contrastive learning, where a machine-learning model is shown many unlabeled images to learn which pairs are similar or different.

The researchers connected a pretrained generative model to a contrastive learning model in a way that allowed the two models to work together automatically. The contrastive learner could tell the generative model to produce different views of an object, and then learn to identify that object from multiple angles, Jahanian explains.

"This was like connecting two building blocks. Because the generative model can give us different views of the same thing, it can help the contrastive method to learn better representations," he says.
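The contrastive objective behind this pairing can be illustrated with a small InfoNCE-style loss: embeddings of two views of the same object (here, the generated views) are pulled together, while mismatched pairs are pushed apart. This is a generic toy sketch of contrastive learning, not the paper's exact method; the function name and toy embeddings are illustrative.

```python
# Toy sketch of a contrastive (InfoNCE-style) objective on paired "views"
# of the same object. Generic illustration, not the MIT paper's exact setup.
import numpy as np

def info_nce_loss(view_a: np.ndarray, view_b: np.ndarray, temperature: float = 0.1) -> float:
    """view_a[i] and view_b[i] are embeddings of two views of the same object."""
    # L2-normalize so the dot product is cosine similarity
    a = view_a / np.linalg.norm(view_a, axis=1, keepdims=True)
    b = view_b / np.linalg.norm(view_b, axis=1, keepdims=True)
    logits = a @ b.T / temperature          # pairwise similarities
    labels = np.arange(len(a))              # the positive pair sits on the diagonal
    # cross-entropy: -log softmax probability of the correct (diagonal) match
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-log_probs[labels, labels].mean())

rng = np.random.default_rng(0)
anchors = rng.normal(size=(8, 16))
positives = anchors + 0.01 * rng.normal(size=(8, 16))   # near-identical views
negatives = rng.normal(size=(8, 16))                    # unrelated views
# Matching views incur a much lower loss than random pairings.
assert info_nce_loss(anchors, positives) < info_nce_loss(anchors, negatives)
```

In the researchers' setup, the "views" come from the generative model rather than from hand-designed augmentations, which is what lets the two components work together automatically.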

Even better than the real thing

The researchers compared their method to several other image classification models that were trained using real data and found that their method performed as well, and sometimes better, than the other models.

One advantage of using a generative model is that it can, in theory, create an infinite number of samples. So, the researchers also studied how the number of samples influenced the models performance. They found that, in some instances, generating larger numbers of unique samples led to additional improvements.

"The cool thing about these generative models is that someone else trained them for you. You can find them in online repositories, so everyone can use them. And you don't need to intervene in the model to get good representations," Jahanian says.

But he cautions that there are some limitations to using generative models. In some cases, these models can reveal source data, which can pose privacy risks, and they could amplify biases in the datasets they are trained on if they aren't properly audited.

He and his collaborators plan to address those limitations in future work. Another area they want to explore is using this technique to generate corner cases that could improve machine learning models. Corner cases often can't be learned from real data. For instance, if researchers are training a computer vision model for a self-driving car, real data wouldn't contain examples of a dog and his owner running down a highway, so the model would never learn what to do in this situation. Generating that corner case data synthetically could improve the performance of machine learning models in some high-stakes situations.

The researchers also want to continue improving generative models so they can compose images that are even more sophisticated, he says.

This research was supported, in part, by the MIT-IBM Watson AI Lab, the United States Air Force Research Laboratory, and the United States Air Force Artificial Intelligence Accelerator.


Training an AI system is time consuming, but this startup says it has a solution – Morning Brew


In northeast England, halfway between Norfolk and Yorkshire, an AI-powered robot spends its days looking at strawberries. It's not as easy as it sounds.

A human farmer can gauge a strawberry's ripeness level by sight and weight, but the process involves putting each strawberry on a scale, which can be destructive and time-consuming. The robot can do the same job for up to 4 million strawberries a day by performing a simple scan of the fruit, undisturbed.

FruitCast, the agricultural AI startup behind the robots, taught its bots how to do their jobs with data from V7 Labs, a London-based startup that helps AI companies automate the training-data process for models. Training can be one of the most labor-intensive parts of getting an AI system off the ground, since it often calls for not only time and resources, but also vetted and relevant data.

"The robots are kind of stupid until you put the intelligence on them," Raymond Tunstill, CTO of FruitCast, which was spun off from the University of Lincoln's food-tech institute, told Emerging Tech Brew. He added, "It's all about taking examples from the real world (is it a ripe strawberry, or is it unripe?) and showing that to our neural networks so that the neural networks can, essentially, learn. And without V7, we never would've been able to classify [them]."

Since its 2018 debut, V7 has used its computer vision platform to train AI models to identify everything from lame cows to grapevine bunches, depending on the clients needs. In 2020, V7 raised a $10 million total seed round, and so far, its clients include more than 300 AI companies, as well as academic institutions like Stanford, MIT, and Harvard.

"The secret behind V7 is this system that we call AutoAnnotate," the startup's CEO Alberto Rizzoli told us. He and his cofounder, Simon Edwardsson, thought it up based on obstacles encountered in their previous business venture: Aipoly, a computer-vision startup that allowed blind users to identify objects using their phone cameras. Though the software worked decently well, Rizzoli recalled, training data was the really difficult part to create.

So they created AutoAnnotate, a general-purpose AI model for computer vision. When a client comes to V7 with training data (images or videos they'd like an AI model to learn from), V7 detects the objects' boundaries in each frame (like strawberries, for instance), and then uses AutoAnnotate to label them. According to its internal measurements, labeling a high-quality piece of training data could take a human up to 2 minutes, said Rizzoli, compared to about 2.5 seconds for AutoAnnotate.


To create that training data, V7's model starts off with a continual learning approach. That could begin with subject matter experts in, say, horticulture, drawing boxes around images of fruit and classifying it by ripeness level (e.g., a level-3 strawberry). They then either accept or correct each of the model's attempts to do the same.

After about 100 human-guided examples, a model is able to make relatively confident classifications, so it transitions into what Rizzoli calls a co-pilot approach: for any given choice, the AI provides its confidence score and the human makes corrections.

"Because it's training data, we always have a human verify it, but it becomes a faster process," Rizzoli said. Later, he added, "When they find something that is low-confidence, they fix it; otherwise it can go into the knowledge of the model, of the training set."
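The co-pilot loop described here amounts to confidence-based routing: high-confidence proposals flow into the training set after light verification, low-confidence ones go to a human first. The sketch below is a hypothetical illustration; the threshold value and function name are assumptions, not V7's actual implementation.

```python
# Toy sketch of the "co-pilot" labeling loop: route each model-proposed
# label by confidence. Threshold and names are hypothetical, not V7's.

def route_label(proposal: dict, threshold: float = 0.9) -> str:
    """Return where a model-proposed label should go next."""
    if proposal["confidence"] >= threshold:
        return "accept"        # joins the training set after light verification
    return "human_review"      # a human fixes it before it is used

proposals = [
    {"label": "ripe_strawberry", "confidence": 0.97},
    {"label": "unripe_strawberry", "confidence": 0.62},
]
decisions = [route_label(p) for p in proposals]
print(decisions)  # prints: ['accept', 'human_review']
```

The design trade-off is the threshold: set it too low and errors leak into the training set; set it too high and the human workload barely shrinks.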

The company finds human experts via a network of business process outsourcing companies, agencies, and consultants, which Rizzoli claims can find a group of labelers on most topics within 48 hours.

Think of it like sending your pup to dog training camp and still having responsibilities upon its return. When a customer develops their fully trained model through V7, they'll still need to keep an eye on it and correct any glaring mistakes, but it should, in theory, be much more capable than before. For example, a newly trained model may be well-equipped to detect strawberry ripeness levels, but if it's somehow presented with a photo of a strawberry keychain, it won't know how to proceed.

Even if a model does become an expert in its domain, it's risky to use it for tasks besides what it's specifically trained for, since results could be unpredictable.

"If you have a car that is trained on data from the United States, it's able to have certain weather conditions, it's not able to do certain road signs, and to figure out whether it can actually drive on snow or desert, you need to test it: you need to run it on a dataset of desert-driving footage and check the accuracy," Rizzoli said. "Believe it or not, this sounds pretty straightforward, but there are almost no tools for doing this. And very few people are actually doing benchmarking on training data, because it's a new thing."


Chipotle Is Testing An AI-Driven Robot To Make Its Tortilla Chips – Forbes


Chipotle is testing Miso Robotics' technology, called Chippy, to make tortilla chips

Chipotle shared its tortilla chip recipe on TikTok in 2020, opening up the opportunity for fans to duplicate that recipe at home.

Now the company is exploring whether it can hand off that duty in restaurants to a robot named Chippy.

The company today announced a test with Miso Robotics that brings artificial intelligence-driven Chippy to its Chipotle Cultivate Center in Irvine, California. Chippy is programmed to replicate Chipotle's exact recipe, with corn masa flour, water, sunflower oil, salt and lime juice.

The plan is to eventually integrate Chippy technology into a Chipotle restaurant in Southern California later this year. From there, the company will lean on employee and guest feedback before developing a broader rollout strategy.

According to a press release, Chippy is the first and only robot that uses artificial intelligence to make tortilla chips.

That said, it's certainly not the only robotics technology catching the interest of restaurant operators as they work to automate tasks and alleviate labor pressures.

The demand for back-of-house automation is evidenced by headlines throughout the past two years in particular. Perhaps the biggest headline is White Castle's recent deployment of burger-flipping robot Flippy at more than 100 of its locations. The burger chain, which began testing Flippy in 2020, is also leveraging Miso's technology for that test.

Miso's Flippy Wings is also in test at Inspire Brands' Buffalo Wild Wings.

Further, Saladworks has been working with Chowbotics to deploy a salad-making robot called Sally, while Jamba has partnered with autonomous food platform Blendid to automate smoothies.

In fact, the cooking robotics space is expected to grow by over 16% a year, reaching an estimated worth of $322 million by 2028.

During a recent interview, Chief Technology Officer Curt Garner said the company is leveraging everything from internet of things to machine learning in an effort to run restaurants more efficiently.

"When you see us leaning into this space, it will be a question of: are there better tools to help our crews versus removing a task? Those are the kind of things we're looking at," he said.

Garner added that the company's goal is to enable crew members to focus on other tasks in the restaurant.

It's worth noting that autonomous technology isn't just being deployed in the kitchen, but across restaurant operations. Chipotle, for instance, is also testing autonomous delivery through its partnership with Nuro, while delivery bot firm Starship has raised $100 million since January. Earlier this week, Bear Robotics secured $81 million to expand its robotics solution, Servi, which buses tables and delivers food and drinks.

Operators were testing the autonomous robot waters before Covid-19, but the pandemic, like all things tech-related, accelerated the space as operators scrambled to find efficiencies. Simultaneously, customers were growing more used to such technologies and came to expect contactless solutions.

Perhaps the biggest draw, however, is the labor-saving component. Autonomous delivery, for instance, removes the need for a driver, which could help cut some of the steep costs that have hindered the delivery model.

Along those labor lines, Chipotle makes a lot of tortilla chips, and Chippy can ease that tedium in the kitchen.

According to the National Restaurant Association's 2022 State of the Industry report, operators expect labor shortages to continue this year and most (including 78% of quick-service operators) plan to leverage automation to help fill those gaps. Two-thirds of restaurant operators say technology and automation will become more common this year.

Chippy could undoubtedly drive the technology closer to a tipping point. If the technology clears Chipotle's stage-gate testing process, it has the potential to roll out to the chain's 3,000-and-growing restaurant footprint.


NHS rolls out AI tool which detects heart disease in 20 seconds – Healthcare IT News


The NHS has rolled out a new artificial intelligence (AI) tool which can detect heart disease in just 20 seconds while patients are in an MRI scanner.

A British Heart Foundation (BHF) funded study published in the Journal of Cardiovascular Magnetic Resonance concluded the machine analysis had superior precision to three clinicians. It would usually take a doctor 13 minutes or more to manually analyse images after an MRI scan has been performed.

The technology is being used on more than 140 patients a week at University College London (UCL) Hospital, Barts Heart Centre at St Bartholomew's Hospital, and Royal Free Hospital. Later this year it will be introduced to a further 40 locations across the UK and globally.

WHY IT MATTERS

Around 120,000 heart MRI scans are performed annually in the UK. Researchers say the AI will help with the backlog in vital heart care by saving around 3,000 clinician days a year, enabling healthcare professionals to see more waiting list patients. It can also give patients and doctors more confidence in results and assist decision-making about possible treatment and surgeries.
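The 3,000-clinician-days estimate is roughly consistent with the figures quoted above, assuming the manual analysis time is largely eliminated and an eight-hour working day (an assumption; the article does not state one):

```python
# Rough consistency check: 120,000 scans/year, ~13 minutes of manual
# analysis per scan. The eight-hour working day is an assumption.
scans_per_year = 120_000
minutes_saved_per_scan = 13
hours_saved = scans_per_year * minutes_saved_per_scan / 60   # 26,000 hours
clinician_days = hours_saved / 8     # ~3,250 days, i.e. "around 3,000"
print(round(hours_saved), round(clinician_days))  # prints: 26000 3250
```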

THE LARGER CONTEXT

There has been increasing interest in the role of AI in supporting disease diagnosis. The NHS AI Lab recently announced that it has created a blueprint for testing the robustness of AI models, after running a proof-of-concept validation process on five AI models/algorithms using data from the National COVID-19 Chest Imaging Database (NCCID).

ON THE RECORD

Dr Sonya Babu-Narayan, BHF associate medical director, said: "This is a huge advance for doctors and patients, which is revolutionising the way we can analyse a person's heart MRI images to determine if they have heart disease at greater speed."

"The pandemic has resulted in a backlog of hundreds of thousands of people waiting for vital heart scans, treatment and care. Despite the delay in cardiac care, whilst people remain on waiting lists, they risk avoidable disability and death. That's why it's heartening to see innovations like this, which together could help fast-track heart diagnoses and ease workload so that in future we can give more NHS heart patients the best possible care much sooner."

Dr Rhodri Davies, BHF-funded researcher at UCL and Barts Heart Centre, said: "Our new AI reads complex heart scans in record speed, analysing the structure and function of a patient's heart with more precision than ever before. The beauty of the technology is that it replaces the need for a doctor to spend countless hours analysing the scans by hand."

"We are continually pushing the technology to ensure it's the best it can be, so that it can work for any patient with any heart disease. After this initial roll-out on the NHS, we'll collect the data, and further train and refine the AI so it can be accessible to more heart patients in the UK and across the world."


SambaNova and DeLorean Team Up to Deliver AI-Powered Renal Care – HPCwire


ORLANDO, Fla., HIMSS, March 16, 2022 SambaNova Systems and DeLorean Artificial Intelligence announce an AI solution to significantly advance renal care. Ascend Clinical, a leading dialysis testing laboratory, is the first customer to leverage the AI solution with deep learning models to classify, track and transition renal patients through disease states and provide recommended actions for treatment and care.

"We are committed to providing world-class care to our patients, so our laboratories need the most innovative technology," said Paul F. Beyer, CEO of Ascend. "We're looking forward to advancing our AI initiatives to provide a higher level of precision and improve our customers' insights into their own data."

The DeLorean AI Medical Renal Model running on SambaNova's platform ingests both structured and unstructured data from internal sources (Ascend or dialysis centers) and external sources like medical records, lab results, previous claims and procedural data. The AI model predicts if a patient will be high- or low-risk and then recommends the next best action for a healthcare professional. This is the first solution on the market to accurately predict risk, provide next best actions, and empower nursing management to hold caregivers accountable for their quality of care.

The financial implications of the AI model are significant, including decreased operating costs and a better customer experience for patients, resulting in a STAR Rating increase, Quality Bonus Payments (QBP), increased CMS Pay 4 Quality (P4Q) and increased revenue. Most importantly, patients benefit from a better quality of life and an extended lifespan.

"SambaNova and DeLorean's medical AI solution is revolutionary: we're bringing AI to an industry that truly needs it," said Severence MacLaughlin, Ph.D., CEO and founder at DeLorean Artificial Intelligence. "The solution will provide Ascend customers with value-based care by providing them with precise classifications faster and timely treatment recommendations along their patient journey."

The DeLorean/SambaNova AI solution is subscription-based, can be hosted in the cloud or on-premises, and is extensible to other areas of the business. Labs across the country can deploy the solution for tracking states like diabetes, heart disease or cancer.

"The DeLorean/SambaNova AI solution provides for improved value-based care with AI and has the ability to significantly improve patient outcomes," said Rodrigo Liang, CEO and co-founder of SambaNova Systems. "We're pleased to work with DeLorean AI and Ascend to showcase how AI technology can truly revolutionize the renal care industry."

About DeLorean Artificial Intelligence

DeLorean Artificial Intelligence strives to be the most HUMAIN AI in the world through building sentient and semi-sentient systems of intelligence to address healthcare, business and economic challenges for large Fortune 500 companies and governments. DeLorean AI currently offers services in the healthcare, life sciences, and financial industries, delivered as AI as a Service (AIaaS). For more information, please visit us at deloreanai.com or contact us at fluxcapacitor@deloreanai.com. Follow DeLorean AI on LinkedIn.

About SambaNova Systems

AI is here. With SambaNova, customers are deploying the power of AI and deep learning in weeks rather than years to meet the demands of the AI-enabled world. SambaNova's flagship offering, Dataflow-as-a-Service, is a complete solution purpose-built for AI and deep learning that overcomes the limitations of legacy technology to power the large and complex models that enable customers to discover new opportunities, unlock new revenue and boost operational efficiency. Headquartered in Palo Alto, California, SambaNova Systems was founded in 2017 by industry luminaries, and hardware and software design experts from Sun/Oracle and Stanford University. Investors include SoftBank Vision Fund 2, funds and accounts managed by BlackRock, Intel Capital, GV, Walden International, Temasek, GIC, Redline Capital, Atlantic Bridge Ventures, Celesta, and several others. For more information, please visit us at sambanova.ai or contact us at info@sambanova.ai.

Source: SambaNova Systems

See original here:

SambaNova and DeLorean Team Up to Deliver AI-Powered Renal Care - HPCwire


How AI Helps State and Local Governments Work Smarter – Government Technology


The rapid pace of change in our world shows no sign of slowing, and with this rapid change comes increased citizen and societal expectations. The digital age demands better speed, efficiency and simplicity in all areas of life. To rise to the occasion, governments have to be more tech savvy and more open-minded to digital transformation.

To do so, some governments are turning to new AI technologies. While the term AI might sound ambiguous, its potential and its applications are quite concrete. AI can improve mobility, advance education, protect food and water safety, decrease emissions, prevent crime, increase cross-border security and even save lives.

Let's look at some ways AI can improve and reshape government agency operations.

One example is the AI-powered chatbot, which can field routine citizen inquiries around the clock. Citizens can also use these chatbots to report traffic accidents, infrastructure problems and other public safety hazards, or to obtain information from the city. In turn, employees have more time for the important task of actually resolving problems. Best of all, citizens can get questions and concerns resolved more easily and quickly, at any time of day.
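At its core, a chatbot like this is an intent router: classify the citizen's message, then hand it to the right department's queue. The sketch below is a minimal keyword-based router; real city deployments use trained intent classifiers, and every department name and keyword here is invented for illustration:

```python
# Hypothetical keyword-to-department mapping; a production chatbot
# would use a trained intent classifier, but the dispatch pattern
# (classify, then route to a queue) is the same.
INTENT_KEYWORDS = {
    "public_works": ["pothole", "streetlight", "dumping"],
    "public_safety": ["accident", "hazard"],
    "info_desk": ["hours", "permit", "parking"],
}

def route_report(message: str) -> str:
    """Route a citizen message to a city department queue."""
    text = message.lower()
    for department, keywords in INTENT_KEYWORDS.items():
        if any(word in text for word in keywords):
            return department
    # Anything unrecognized falls back to a human-staffed queue.
    return "general_inquiries"
```

The fallback queue is what keeps staff "in the loop": the bot resolves the routine cases and escalates the rest.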

The city of San Jose, Calif., is a prime example. The city created the My San Jose app, a mobile self-help platform and 311 system. More than 45,000 residents now use My San Jose to interface with their local government, giving officials a real-time dashboard of citizen requests. Staff have been able to prioritize resources for issues such as illegal dumping and abandoned vehicles, and redundant calls have been reduced by about 20 percent.

The city of Las Vegas, for example, invested in intelligent automation for vehicle traffic information. The city applied AI to its connected and autonomous vehicle initiatives in its downtown area. Lyft ran trials with 40 autonomous cars, and latest-model Audi cars can also receive data feeds from the city.

Governments can also turn to intelligent technologies to meet sustainability goals, with tools like smart waste management systems. Sensors at trash receptacles monitor waste levels in real time via a cloud-based dashboard. Once a container is full, the system automatically triggers an alert for the city's garbage truck fleet to service that location. Instead of being emptied at set times throughout the week, containers are emptied only as needed.
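The dispatch logic described above reduces to a threshold check over the latest sensor readings. A minimal sketch, assuming fill levels normalized to 0.0–1.0 and an illustrative 90% alert threshold (both values are invented for this example):

```python
FILL_THRESHOLD = 0.9  # hypothetical fraction of capacity that triggers a pickup

def containers_needing_service(readings, threshold=FILL_THRESHOLD):
    """Given the latest sensor readings as {container_id: fill_level},
    return the container IDs to add to the fleet's dispatch queue.

    Fill levels are normalized: 0.0 is empty, 1.0 is full.
    """
    return [cid for cid, level in sorted(readings.items())
            if level >= threshold]
```

Running this against each batch of readings, rather than a fixed weekly schedule, is what lets trucks skip half-empty containers.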

Organizations can apply AI to tackle tasks like image recognition and document analysis; speech and language recognition for native and non-native speakers; anomaly detection to flag mishaps and fraud; data labeling; and forecasting critical business metrics, from revenue to resource requirements. And AI's potential will only continue to grow. For state and local organizations, AI will be key to finding more meaning in the troves of information already sitting in their business applications, ultimately making processes more efficient and freeing up time and resources to focus on the citizen experience.

To take advantage of all that AI has to offer, public-sector organizations need a clear vision and concrete commitment from leadership to adopt AI in intelligent and transformative ways, and to ensure their workforce has the opportunity to learn the skills required to thrive in an AI-infused world.

Continue reading here:

How AI Helps State and Local Governments Work Smarter - Government Technology


Hidden Door Launches AI Game Platform to Build the Narrative Multiverse – Business Wire


NEW YORK--(BUSINESS WIRE)--Hidden Door, a new technology studio at the intersection of machine learning and immersive entertainment, has launched with the ambitious goal of building the world's first narrative multiverse: a platform for creating and experiencing infinite stories together. Founded by AI entrepreneurs Hilary Mason and Matt Brandwein, the startup has closed a $2M pre-seed round led by Northzone, with participation from Makers Fund, Betaworks, Brooklyn Bridge Ventures, Homebrew, and angels Dan Sturman (CTO of Roblox) and Joshua Schachter (founder of del.icio.us).

Hidden Door's mission is to inspire creativity through play with AI, using natural language, vibrant art, and playful social storytelling experiences. Its first product is a social game platform for playing stories with friends across a variety of shared creative universes.

In the game, friends team up to remix story worlds styled as interactive graphic novels. With the storytelling vibe of a tabletop roleplaying game, including a playful AI narrator, anyone can improvise endless adventures together with a host of responsively generated NPCs, items, and locations. These, in turn, can be collected, traded, and shared with friends to remix into new worlds and stories. In the future, the game will expand to include a marketplace for content from professional writers, artists, and other players.

"We connect by telling stories together," said Hilary Mason, co-founder and CEO of Hidden Door. "It's finally possible to build machines that make story itself computable, giving anyone the power to create stories in unique ways. The challenge is to design a system that can improvise alongside creative players while guiding diverse and safe story experiences, and we believe we've done just that."

Hidden Door's platform is built on extensible proprietary models of language and art, based on millions of stories. Through its APIs, developers can build online experiences for interactively improvising new stories together with groups of players, while adhering to rigorous content and safety standards. In the future, this will enable creators to automatically adapt existing works for the new medium.

"Our investment in Hidden Door was driven by their vision for a dynamic and playful narrative platform that brings people together around the magic of telling stories," said Jay Chi, founding partner, Makers Fund. "Hidden Door is well underway with a solution for what long seemed to be the impossible problem of creating an AI that can have realistic conversations with players and invent a narrative along the way. They are a standout team blending storytelling, machine learning and interactive entertainment. We can't wait to see how the community generates new ideas and stories to further the narrative."

"When thinking about AI, we often frame technology and humanity as in opposition," said Wendy Xiao Schadeck, partner at Northzone. "The reason we're so excited about Hidden Door is that Hilary and Matt are building a machine that actually empowers human collective creativity. That's not only powerful, it's also the multiverse we would want to live in."

At launch, players will be able to experience an initial collection of original playable worlds. The invite-only alpha will be available in early 2022 for players ages 9 and up who love role-playing games or improvising stories with friends. Sign up here to pre-register.

To follow Hidden Door's progress and receive early announcements, join the mailing list at http://www.hiddendoor.co and follow on Twitter at @hiddendoorco.

For press inquiries or interviews, please contact pr@hiddendoor.co.

About Hidden Door

Hidden Door is building the narrative multiverse at the intersection of machine learning and immersive entertainment. Launching in early 2022, our first product is a social game platform for playing stories with friends across a variety of shared creative universes. Our mission is to inspire creativity through play with AI, using natural language, vibrant art, and playful social storytelling experiences. Join our community at hiddendoor.co and follow our progress on Twitter at @hiddendoorco.

About Northzone

Northzone (northzone.com) is an early-stage venture capital fund built on experience spanning multiple economic and disruptive technology cycles and has over $1.7 billion under management. Founded in 1996 and with a team spread across three main hubs, New York, London and Stockholm, Northzone has to date raised nine funds and invested in more than 150 companies, including category-defining businesses like Spotify, iZettle, Avito, Kahoot!, Hopin, Klarna and Trustpilot.

About Makers Fund

Makers Fund is a global interactive entertainment venture capital firm focused on early-stage investments. Makers is dedicated to furthering growth and innovation in the interactive entertainment industry. With more than 80 portfolio companies to date, Makers provides founders strategic value that is deeply catered to companies across the value chain in the industry. For more information visit http://www.makersfund.com.

Read the original post:

Hidden Door Launches AI Game Platform to Build the Narrative Multiverse - Business Wire


The AI beauty startup that champions transparency and inclusion – Vogue Business


"After lockdowns hit stores, a lot of brands suddenly had to find ways to showcase their products in innovative and online-only methods," says analyst Moorut, who notes the momentum has continued. "Even if stores are [now] open for business, it'll take a while until consumers feel comfortable to visit and engage in the same way as before the pandemic. Anything that beauty brands can offer that can showcase products digitally is a good tool to have in a brand's arsenal."

The integration of Skin Match's technology can take between two weeks and two months, with plans to reduce the time further. Benz realised early in the company's development that working to short time frames is imperative. "With the way we were originally structured, businesses needed a lot of tech capacity to integrate our solutions, so it often took up to a year. That's not a sustainable business model," she says. "We worked a lot on making our solutions super simple to integrate. That has been huge for our success."

The company has raised 350,000 to date, with a further 550,000 in funding currently being finalised. Skin Match plans to invest in product development, new hires to work on the client base, and even smoother integration for brands, including tools such as Shopify plugins. "We want to really make it an automated process to use our tools, especially to make it even more affordable and feasible for younger startups."

Creating a flexible pricing structure has been challenging. "Over the last year and a half we've invested in finding a product-market fit," says Benz. One solution that works for smaller startups interested in Skin Match's tools is to charge a fee only when they actually use them. "The usage-based approach allows us to offer our solutions to startups. So, even if you're a brand that has just launched, you can already afford us."

Skin Match's focus on inclusivity goes beyond skin tones. The company recently began working with Look Fabulous Forever, a brand founded by Tricia Cusden (at the age of 65) that targets the 50+ age group. Skin Match is developing a foundation finder that takes into account ageing skin's needs, such as the changes that occur during menopause or the drier skin that comes with age.

More:

The AI beauty startup that champions transparency and inclusion - Vogue Business
