
Category Archives: Ai

New toolkit aims to help teams create responsible human-AI experiences – AI for Business – Microsoft

Posted: July 21, 2021 at 12:25 am

Microsoft has released the Human-AI eXperience (HAX) Toolkit, a set of practical tools to help teams responsibly design and build artificial intelligence technologies that interact with people.

The toolkit comes as AI-infused products and services, such as virtual assistants, route planners, autocomplete, recommendations and reminders, are becoming increasingly popular and useful for many people. But these applications have the potential to do things that aren't helpful, like misunderstand a voice command or misinterpret an image. In some cases, AI systems can demonstrate disruptive behaviors or even cause harm.

Such negative outcomes are one reason AI developers have pushed for responsible AI guidance. Supporting responsible practices has traditionally focused on improving algorithms and models, but there is a critical need to also make responsible AI resources accessible to the practitioners who design the applications people use. The HAX Toolkit provides practical tools that translate human-AI interaction knowledge into actionable guidance.

"Human-centeredness is really all about ensuring that what we build and how we build it begins and ends with people in mind," said Saleema Amershi, senior principal research manager at Microsoft Research. "We started the HAX Toolkit to help AI creators take this approach when building AI technologies."

The toolkit currently consists of four components designed to assist teams throughout the design process, from planning to testing: the Guidelines for Human-AI Interaction, the HAX Workbook, the HAX Design Library, and the HAX Playbook.

The idea for the HAX Toolkit evolved from the set of 18 Guidelines for Human-AI Interaction, which are based on more than 20 years of research and were initially published in a 2019 CHI paper. As the team began sharing those guidelines, they learned that additional tools could help multidisciplinary teams plan for and implement AI systems aligned with the principles the guidelines reflect.

"The workbook came about because many teams were not exactly sure how to bring the guidelines into their workflow early enough to have a real impact. It is intended to bring clarity when teams have an idea about a feature or product but have not defined it completely," said Mihaela Vorvoreanu, director of UX Research and Responsible AI (RAI) Education for Microsoft's AI Ethics and Effects in Engineering and Research (Aether) Committee, which collaborated with Microsoft Research to create the toolkit.

Most importantly, Vorvoreanu said, the workbook should be used by a multidisciplinary team, including data scientists, engineers, product teams, designers and others who will work on a project.

"You need to be together to have this conversation," said Vorvoreanu, who along with Amershi leads the HAX Toolkit project. "The workbook gives you a common vocabulary for different disciplines to talk to each other so you're able to communicate, collaborate and create things together."

That was certainly true for Priscila Angulo Lopez, a senior data scientist on Microsoft's Enterprise and Security Data and Intelligence team, who said, "The workbook was the only session where all the disciplines got together to discuss the machine learning model. It gave us a single framework for vocabulary to discuss these problems. It was one of the best uses of our time."

In that session, the team collectively discovered a blind spot: they realized that a solution they had in place would not, in practice, solve the problem it was supposed to solve for the user, and were thus able to save a lot of time and resources.

Justin Wagle, a principal data science manager for Modern Life Experiences, piloted the workbook for a feature called Flagged Search in the Family Safety app. He said it helped the team think through ethical and sociotechnical impact.

"It helped us all, data science, product and design, collaborate in a way that abstracted us from all the technicality of implementing machine learning," he said. "We can talk about these very technical things, but it comes down to what that means for the user."

He said the workbook also helped the team better articulate to consumers exactly how the system works, as well as discover where the system can go wrong and how to mitigate it. It's now part of the process for every project on his team.

The HAX team set out to differentiate itself from existing human-AI interaction resources that lean toward tutorials. The toolkit gives teams specific guidelines as well as practical tools they can start using now and throughout the development lifecycle.

For example, the guidelines are divided into four groups, based on when they are most relevant to an interaction with an AI system: initially; during interaction; when the AI system gets something wrong and needs to be redirected; and over time.

In the Initially group of guidelines, there are two: 1) Make clear what the system can do and 2) Make clear how well the system can do it.

Julie Stanford, a lecturer in the computer science program at Stanford University and a principal at a design firm, used these two guidelines to clearly communicate with a client, based on data her firm had gathered. It turned out that users of the client's product expected the product to learn from its mistakes, something the product was not programmed to do.

In the case of Stanford's client, an introductory blurb might be one way to help users better understand the product's capabilities. An introductory blurb is one of several design patterns that can be used to implement Guideline 1. The toolkit has 33 design patterns for eight of the 18 guidelines.

"The design patterns provide proven solutions to specific problems, so that people do not have to reinvent the wheel and try to create their own processes," Amershi said.

"This is how we generally work. We take a human-centered approach to the tools we're creating ourselves. We ask: what are people most struggling with? What will unblock people the fastest? What will have the biggest impact?"

The HAX Design Library contains the patterns, as well as specific examples of how those patterns have been implemented by others. It can be filtered by guideline, product category and application type.

"We are asking people to submit examples and patterns," Vorvoreanu said. "We're hoping this design library is going to be a community library where people keep contributing and adding examples."

The final tool in the toolkit, the playbook, helps teams anticipate what might go wrong with an AI system by generating the most common failure scenarios for a given design type. For example, a couple of the most common errors encountered by an AI-powered search feature that uses speech as its input would be transcription issues or background noise.

"It can be difficult to know when a system can fail until it encounters a failure situation," Vorvoreanu said. "The playbook helps a team proactively and systematically explore what types of failures might occur."
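The playbook's approach can be sketched as a simple lookup from an AI feature's input modality to the failure types commonly seen for it. Everything below (the failure catalogue, the function names) is an illustrative stand-in, not the playbook's actual content:

```python
# Illustrative-only catalogue of common failure modes per input modality.
COMMON_FAILURES = {
    "speech": ["transcription errors", "background noise", "accents/dialects"],
    "image": ["poor lighting", "occlusion", "out-of-distribution objects"],
    "text": ["typos", "ambiguous phrasing", "slang"],
}

def likely_failures(input_modalities):
    """Collect candidate failure scenarios for a feature's inputs."""
    scenarios = []
    for modality in input_modalities:
        scenarios.extend(COMMON_FAILURES.get(modality, []))
    return scenarios

# An AI-powered search feature that takes voice input:
voice_search_risks = likely_failures(["speech"])
```

The point of such a catalogue is that the team enumerates likely failures before shipping, rather than discovering them from user complaints.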

Stanford learned about the guidelines during a talk Amershi gave at Stanford University. She has since incorporated them into her Design for AI course.

"I felt the guidelines were so robust and thoughtful that I would make them a cornerstone of the user interface design portion of that class," she said.

In the first part of the course, students look at comparative AI experiences online, then evaluate them based on the guidelines. Later, they use the guidelines to design an AI addition for an existing project. Students prioritize which guidelines are most relevant to their project.

Stanford said the guidelines gave the students a common language and helped them see issues they may not have noticed within the AI experiences they have every day. And when it came time to grade the students' design work, Stanford had a comprehensive, fair way of measuring whether they had met their goals.

"It's a really flexible tool both for teaching and practicing design," she said.

The HAX team encourages users to share their feedback on the Contact us page and submit examples to the HAX Design Library, so the HAX community can learn together.

"We are hoping this can be a trusted resource where people can go to find tools throughout the end-to-end process," Amershi said. "We will continue to update and create new tools as we continue to learn and work in this space."

Explore the tools by visiting the HAX Toolkit website.

To learn more about the HAX Toolkit, join Amershi and Vorvoreanu for a webinar on July 21 at 10 am PT. Register for the webinar.


Scientists Train AI to Visualise the Unseen, Bringing Them Closer to the Human Understanding of the World | The Weather Channel – Articles from The…

Posted: at 12:25 am

The new AI system takes its inspiration from humans: when we see a colour on one object, we can easily imagine it on any other object by substituting the original colour with the new one.

Very few species on Earth, including humans, are gifted with the ability to imagine. We can form mental pictures and visualise things, which may not be present in our field of vision at the moment. We can decompose an imagined object into its attributes or components, like imagining a red apple with a green twig. We also can remove the attribute of red and green colourations from the objects and swap them to picture a green apple with a red twig.

Using our ability to imagine, we can tell fantasy-based stories, take inspiration from existing events, modify them to make new stories, and even paint imaginative pictures of how the future might look. Researchers from the University of Southern California, United States, have now designed an artificial intelligence (AI) based algorithm, which can visualise the unseen.

The scientists claim that, using their AI, images of red boats and blue cars can be decomposed and recombined to synthesise novel images of red cars. The research was recently presented as a poster at the International Conference on Learning Representations (ICLR) 2021.

When our brain imagines something unreal, several neural networks are activated to enable it. While machines can perform several tasks much better than humans today, they still lack this basic human characteristic. So the researchers are training the AI to replicate the human ability of imagination: to distil attributes such as colour, shape, texture, pose and position from an object, and use them to create new objects with novel characteristics.

"We were inspired by human visual generalisation capabilities to try to simulate human imagination in machines," says the study's lead author Yunhao Ge, a computer science PhD student. AI-based algorithms run through extrapolation. Given a large enough number of samples to process, an AI can generate novel samples from them while preserving required traits and synthesising new ones based on extrapolation. In this case, the AI is essentially trained to produce what we commonly know as deepfakes using a concept of disentanglement.

In recent years, deepfakes have been among the most prominent vehicles for spreading hoaxes and bogus content on the internet. The term is a portmanteau of "deep learning" (training an AI algorithm) and "fakes". You might have come across deepfakes as a viral video of a celebrity saying things that seem very unlike them; upon further checking, you might find fact-checks establishing that someone superimposed the spoken content onto the face of the said celebrity.

This process of substituting a person's identity (face) while preserving the original movement of the mouth is essentially disentanglement. Attributes of an image or video are broken down into their simplest components, like shape, colour and movement, and are used to synthesise novel content using computers.

In this study, the AI was provided with a group of sample images. It sequestered one image and mined for similarities and differences among the other images to produce an inventory of the available sample features. This is what the researchers call "controllable disentangled representation learning." Then, it recombines this available information to achieve "controllable novel image synthesis," or, in human terms, imagination.

"For instance, take the Transformers movies as an example," said Ge. "It can take the shape of a Megatron car, the colour and pose of a yellow Bumblebee car, and the background of New York's Times Square. The result will be a Bumblebee-coloured Megatron car driving in Times Square, even if this sample was not witnessed during the training session."
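The disentangle-and-recombine idea can be sketched in a few lines: if an encoder maps each image to a latent vector whose slices correspond to known attributes, "imagining" a red car from a red boat and a blue car is just a slice swap. The latent layout, slice sizes, and functions below are hypothetical stand-ins for illustration, not the paper's actual model:

```python
import numpy as np

# Assumed layout: each attribute owns a fixed slice of the latent vector.
ATTR_SLICES = {"shape": slice(0, 4), "colour": slice(4, 8), "background": slice(8, 12)}

def swap_attribute(z_a, z_b, attr):
    """Return a copy of z_a with the given attribute slice taken from z_b."""
    z_new = z_a.copy()
    z_new[ATTR_SLICES[attr]] = z_b[ATTR_SLICES[attr]]
    return z_new

# Stand-in latents for a "red boat" and a "blue car" (a real system
# would obtain these from a trained encoder).
rng = np.random.default_rng(0)
z_red_boat = rng.normal(size=12)
z_blue_car = rng.normal(size=12)

# A "red car": the car's shape and background, but the boat's colour slot.
z_red_car = swap_attribute(z_blue_car, z_red_boat, "colour")
```

A decoder trained alongside the encoder would then render `z_red_car` as an image of a red car; the sketch only shows the latent-space recombination step.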

This study is proof that no scientific advancement can be categorised as an absolute boon or bane. When placed in the wrong hands, the same technology can be used to produce deepfake images and videos and can spread misinformation and fake news. On the other hand, the researchers of this study have provided a potential use of the same for the greater good.

The application framework used here is compatible with nearly any type of data and opens up a range of possibilities. For example, this AI could be immensely helpful in the field of medicine. Doctors could potentially disentangle a drug's medicinal function from all the other factors unique to a patient, and design targeted drugs by recombining it with the corresponding factors of other patients.

"Deep learning has already demonstrated unsurpassed performance and promise in many domains, but all too often this has happened through shallow mimicry, and without a deeper understanding of the separate attributes that make each object unique," said Laurent Itti, a professor of computer science at the University of Southern California and the principal investigator of this study. "This new disentanglement approach, for the first time, truly unleashes a new sense of imagination in A.I. systems, bringing them closer to humans' understanding of the world," he adds.

The study, titled "Zero-shot Synthesis with Group-Supervised Learning", was published at the 2021 International Conference on Learning Representations.


Galleries are using AI to measure the quality of art SET ME AFLAME – The Next Web

Posted: at 12:25 am

AI has set its destructive sights on one of life's greatest pleasures: visiting galleries.

An Italian museum has started using AI-powered cameras to measure the attraction value of works of art.

The ShareArt devices collect visual data on spectators, such as how long they look at a painting and where on the canvas their attention is focused.

"Thanks to simple data elaboration, an observer's gaze can be translated into a graphic," Stefano Ferriani, one of the researchers behind the project, told Bloomberg CityLab. "We can detect where most of people's attention is concentrated."
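As a rough illustration of how gaze data might be turned into such a graphic, fixation points can be binned into a coarse grid over the canvas to show where attention concentrates. The actual ShareArt pipeline is not public, so the function and numbers below are assumed stand-ins:

```python
import numpy as np

def gaze_heatmap(points, canvas_w, canvas_h, grid=4):
    """Bin (x, y) fixation points into a grid x grid matrix of counts."""
    heat = np.zeros((grid, grid), dtype=int)
    for x, y in points:
        col = min(int(x / canvas_w * grid), grid - 1)
        row = min(int(y / canvas_h * grid), grid - 1)
        heat[row, col] += 1
    return heat

# Hypothetical fixations on a 640x480 view of a painting: three near
# the top-left (a face, say), one near the centre.
fixations = [(120, 80), (130, 90), (125, 85), (400, 300)]
heat = gaze_heatmap(fixations, canvas_w=640, canvas_h=480, grid=4)
```

The resulting count matrix is what would be rendered as the heatmap graphic laid over the artwork.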

The system could help curators understand which artworks and layouts appeal to visitors. A useful purpose, I suppose, but the tech fills me with dread.

Data analytics have influenced art for centuries, from counting footfall at theaters to projecting album sales.

In more recent years, the Relativity Media studio has been using predictive algorithms to select movies to produce.

"I'm not in this for the art," said Relativity founder Ryan Kavanaugh in 2012.

The company has since filed for bankruptcy twice.

In galleries, AI can help improve accessibility and make exhibitions more interactive. But it's a horribly reductive measurement of artistic value.

Our attention is often drawn to the controversial or bizarre before the subtle and thoughtful. Brilliant works could be overlooked because they don't generate sufficient engagement.

Furthermore, our expressions are, at best, an unreliable indicator of our feelings. We all show our emotions differently, and algorithms often fail to discern them, particularly when they're applied to minority groups.

The ShareArt system is currently focused on gaze analysis, but with rules on masks easing, it could soon move on to facial gestures. That sounds like another good reason to wear a face covering even if COVID disappears.

Greetings Humanoids! Did you know we have a newsletter all about AI? You can subscribe to it right here.


The Role of AI in Recruitment (+ Top 7 AI Recruiting Tools) – ReadWrite

Posted: at 12:25 am

Artificial intelligence is gaining more and more attention. Intelligent self-learning programs are disrupting many industries, including eCommerce, manufacturing and production lines, transportation, agriculture, logistics and supply chain, and more. Such programs automate redundant processes that don't require a high level of creativity, increasing overall effectiveness.

"It is difficult to think of an industry that AI will not transform. This includes healthcare, education, transportation, retail, communications, and agriculture. There are surprisingly clear paths for AI to make a big difference in all of these industries." Andrew Ng, founder and CEO of Landing AI

These disruptive forces have started hitting the HR industry as well, and the innovations brought by AI are here to stay; this is anything but a temporary phenomenon. The most recent development in HR technology is AI in recruitment, and the changes it brings will be significant, since many recruitment tasks are redundant and time-consuming and can easily be automated.

Apart from that, AI can bring innovative solutions to emerging problems in HR and recruitment, like managing a multi-generational workforce, addressing rising mental health issues, and promoting an inclusive culture.

The entire HR industry will undergo major changes as AI makes its work easier, faster, and better.

This article explores the role of AI in recruitment, its possible use cases, the top tools available to automate recruitment processes, the potential challenges of adopting AI, and its overall impact.

According to 52% of talent acquisition leaders, the most challenging part of recruitment is screening and shortlisting candidates from a large talent pool. When integrated with an applicant tracking system (ATS), AI screening software can make hiring recommendations by utilizing data like candidates' performance, merits, experience, etc.

The AI screening software can learn from existing candidates' experience and skill sets and make recommendations accordingly.

Screening resumes still makes up the largest part of a recruiter's daily schedule. Implementing AI for resume screening can free up recruiters' time to a great extent, enabling them to screen the shortlisted candidates more effectively.
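As a toy illustration of automated screening, a resume already parsed to plain text can be scored by the weighted presence of required skills and the pool sorted by score. The skill list, weights, and function names are invented for illustration; real AI screening tools use far richer models than keyword matching:

```python
# Hypothetical job requirements with made-up importance weights.
REQUIRED_SKILLS = {"python": 3, "sql": 2, "machine learning": 3, "communication": 1}

def score_resume(text):
    """Score a resume by the weighted presence of required skills."""
    text = text.lower()
    return sum(weight for skill, weight in REQUIRED_SKILLS.items() if skill in text)

def shortlist(resumes, top_n=2):
    """Return the top_n resume texts by score, highest first."""
    return sorted(resumes, key=score_resume, reverse=True)[:top_n]

candidates = {
    "alice": "5 years Python and SQL, strong machine learning background",
    "bob": "Great communication skills, retail experience",
    "carol": "Python developer, some SQL",
}
ranked = shortlist(list(candidates.values()))
```

In a production system the scoring function would be a learned model, and its output would feed back into the ATS rather than a plain list.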

AI chatbots for recruitment work as a recruiter's assistant: a chatbot can collect candidates' basic information, like education and experience, and ask basic screening questions. Based on the inputs, the chatbot can rank candidates for the recruiter, saving time and effort.

Chatbots can also answer candidates' frequently asked questions and set up interviews with worthy candidates, thereby automating almost 80% of top-of-the-funnel/pre-screening activities.

Sourcing candidates to build the recruitment pipeline is a time-consuming and challenging task. AI for candidate sourcing can extend recruiters' reach, as artificial intelligence can scan millions of profiles in a matter of seconds across multiple job portals, social media platforms, and more.

Normally, a recruiter takes about six seconds to scan a single resume, and a single corporate job opening attracts around 250 resumes: that alone is roughly 25 minutes of nonstop scanning per opening, and it adds up quickly across many simultaneous openings. AI can automate this part of the process, so recruiters can focus on the next stages of hiring.

Since AI can go through heaps of data in seconds, it will make optimal use of your company's database of past candidates. So, rather than spending a great deal of money on sourcing new candidates, you can mine the existing candidate pool for a strong candidate who might fit existing job openings.

Around 3.2% of the entire workforce of the USA works remotely, and almost 16% of companies hire remote workers exclusively.

The numbers will grow from here on out since the pandemic redefined the working culture for many companies.

This means it will get harder for recruiters, since hiring remote workers comes with its own set of challenges. However, AI-powered pre-assessment tools, AI-assisted video interviews, and similar technologies can enable recruiters to hire better and faster.

Recruiters can expand their diversity hiring efforts by implementing AI-powered solutions. For example, a startup called GapJumpers uses AI to analyze your current screening, hiring, and promotion processes, run blind hiring, and create inclusive job ads.

Around 56% of candidates believe AI may be less biased than recruiters, and about 49% think that implementing AI might increase their chances of getting hired.

Recruiters might be aware of their conscious biases, but they may be unaware of their subconscious ones. And it's becoming more and more important to promote diversity in the workplace, since it improves employee productivity, happiness, retention, creativity, and more.

Video interviews can save an ample amount of time for both parties, candidates and employers. And video interviews with AI in the mix can analyze a candidate's expressions, capture and analyze their moods, assess their personality traits, and more.

Big companies like Unilever, Dunkin' Donuts, IBM, etc., already use AI in their video interviews and claim that it has increased ethnic and socioeconomic diversity in their hiring.

Your employee value propositions can't follow a one-size-fits-all doctrine. Every employee has a different set of skills and experience, and more importantly, their needs, goals, and aspirations are unique to them.

What's valuable to one employee can be useless to another. For example, an employee living within 2 miles of the office building won't use company vehicles or car services. On the other hand, an employee who lives in the suburbs, with a good 25 miles between their house and the office, will be grateful to have a company vehicle or car service.

Providing custom-tailored value propositions to your employees can result in improved productivity and a happier workforce.

AI can analyze your employees' behavior, personality traits, and more to personalize your company's value proposition to each employee.

Such personalization can help you craft lucrative offers for high-demand candidates, directly improving your overall conversion rates.

Background checks are an essential part of the hiring process. This is the stage where the recruiter verifies the candidate's credentials, experience, education, etc. Conventionally, this process is quite tedious and time-consuming.

But with AI-powered tools, this process can be made efficient, private, unbiased, and faster.

Checkr is one such platform that automates the process of background checks with its AI-powered solutions. Their solutions are used by companies like Netflix, Airbnb, Instacart, etc.

Used by companies like Nike, General Dynamics, Intel, and Wayfair, Hiretual helps your talent sourcing teams source and engage the right applicants for open job positions. The tool provides features like AI sourcing, market insights for creating effective hiring strategies, personalized candidate outreach, diversity & inclusive hiring, and 30+ ATS integrations that fetch the right data in real time.

Used by companies like McDonald's, MARS, MOL Group, and ExxonMobil, XOR helps attract, engage, and recruit candidates efficiently. It offers features like live chat, virtual career fairs, WhatsApp campaigns, on-demand video interviews, recruiter and HR connect, and more.

XOR modernizes your recruitment processes and allows your HR and recruitment teams to leverage the power of AI, the internet, and social media from a single platform.

Used by companies like AirAsia, TATA Communications, HULU, and Twilio, Eightfold uses AI to guide your career site visitors to the right job openings. Further, they provide talent acquisition, talent diversity, talent management, and talent experience solutions to streamline every recruitment process aspect.

Conversion rates drastically increase with Eightfold.ai in the picture because of its predictions.

Humanly is a recruitment tool that focuses heavily on diversity hiring and seamlessly integrates with top ATS vendors. With its AI-powered chatbots, your recruitment team can automate candidate screening and interview scheduling in a DEI (diversity, equity, and inclusion) friendly manner.

Used by Armoire, Tiny Pulse, Swiss Monkey, Oakland Roots, and BPM, Humanly claims to save over 60 hours of scheduling and screening per job opening and to complete 95% of background checks within 48 hours.

Used by companies like Chick-fil-A, Six Flags, Ocado Group, Agoda, and B&M, MyInterview leverages the power of AI and machine learning to analyze candidates' answers for professionalism, reasoning, and more.

The tool provides candidate insights using deep analytics and empowers recruiters to customize their candidates' experiences. For example, you can review interviews, shortlist candidates, and collaborate with stakeholders more efficiently.

Used by Amazon, Onpartners, CT Assist, Three Pillars, and more, Loxo is a hiring and recruitment automation platform. It comes equipped with a recruitment CRM, a talent intelligence system, and an applicant tracking system.

Known for modern features like smart grids and task management, Loxo claims an updated database of over 530 million people and a customer satisfaction rate of over 98%.

Used by the likes of Twitter, Salesforce, Waymo, and Rover, Seekout is an AI-powered talent search engine that lets you search for talent in the way that's most comfortable for you. You can use its search engine for direct search or boolean search, apply power filters, or leverage AI matching.

Seekout's AI can shortlist candidates based on the job description provided. The list is curated either from its own 500 million talent profiles or from your ATS. Additionally, you can reach out to candidates with personalized messages and automated outreach sequences.

Other prominent features include talent analysis, candidate engagement, diversity hiring, and more.

You can also download its Chrome extension, which adds a level of ease to your recruitment process.

Roughly 75% to 88% of candidates applying for a particular position are underqualified or not a right fit. And for every job opening, it takes recruiters a good 23 hours to scan through all the applications and resumes.

More importantly, the volume of applications will increase in the near future, considering the recent developments in the unemployment rates because of the pandemic. At the same time, the recruitment and HR teams are predicted to remain the same size.

This only means that they'll have to put in more effort with fewer hands on deck than necessary.

Since AI can automate candidate screening and scheduling, it saves a lot of recruiters' time and resources. And since you'll be faster and more efficient with your processes, the chances of losing a good candidate to your competitors become quite low.

The talent pool gets bigger every year, unless the job listings are for unconventional roles. Recruiters have to screen the entire talent pool to get to the right batch of candidates, which is tiring and time-consuming.

With AI in screening, recruiters will get a list of shortlisted candidates well suited to the jobs. They can then screen them further on intangible attributes like cultural fit, previous organizational behavior, etc. Needless to say, this drastically improves the quality of hired candidates.

AI-backed tools and software come equipped with high-level analytics, providing recruiters with insights into every aspect of their process. Using these insights, they can easily optimize their operations and make the most of their time and resources.

AI tools like pre-screening chatbots are online 24/7. This means whenever candidates reach out to apply or ask questions, the chatbots are there to clear their doubts and provide the necessary information about the job role and responsibilities, a company overview, and more.

AI can provide candidates with detailed support throughout the application process, guiding them whenever they get stuck or solving their queries.

This improves their overall experience of the recruitment process and creates a positive image for your brand. That can translate into more referrals from candidates, making it easier for recruiters to expand their reach without investing time or money.

Chatbot conversations can lack the human touch and be perceived as robotic, and it is quite possible that the bots might not be able to parse human lingo, cultural context, everyday slang, etc.

Moreover, around 80% of candidates would prefer to have human interactions over AI-powered chats with bots.

The candidate experience might be negatively affected by the use of AI.

To mimic human intelligence, AI-powered programs require a great deal of input data. Without it, you can't expect accurate results from AI-powered programs and tools.

AI-powered tools are built on previous data, data that originated from years of recruiters screening candidates. The chances that those recruiters tapped into their unconscious biases while screening and hiring are quite high.

So, the possibility of AI learning human biases is not lost on us.

The only way to ensure that your AI tools are not replicating human biases is to choose a vendor that's well aware of such issues and has taken steps to remove patterns of such bias.

Changing old ways and adopting new technologies and practices can be hard in general. As a result, it is quite possible that HR and recruitment professionals will take these tools and software with a grain of salt.

Learning to work with a new tool and process is as hard as unlearning old ways and processes.

AI can be unreliable in candidate screening, especially when it encounters unconventional resumes in new formats or fonts. When the AI can't recognize a resume's pattern, it might end up rejecting a candidate well suited to the position.

It's easier for AI to understand data like years of experience, education level, etc. But intangible attributes like cognitive aptitude, personality traits, cultural fit, and soft skills might get lost on AI.

This means you can lose an excellent candidate if your entire recruitment process relies on AI-powered tools.

Nothing is perfect. Everything comes with its own set of pros and cons. AI in recruitment is the same.

On the one hand, you have recruiters not quite willing to adopt the latest technologies, or fearing that they'll be replaced. But on the other hand, adoption will essentially increase the importance of human work and the human touch in the recruitment process.

On the whole, the pros of implementing AI in recruitment outweigh the cons by a large margin. However, let's not forget that its pros are not limited to recruiters; AI positively impacts candidates' experience, improves organizations' efficiency and culture, and helps form better teams.

The cost of not keeping up with the technology can be quite high, especially if your industry is saturated with many big players.

The key still lies in finding the right balance between automation and manual work. Keep human the processes that require a high level of creativity, empathy, and the analysis of intangible attributes. Automate the ones that are repetitive and time-consuming.

Read the rest here:

The Role of AI in Recruitment (+ Top 7 AI Recruiting Tools) - ReadWrite

Posted in Ai | Comments Off on The Role of AI in Recruitment (+ Top 7 AI Recruiting Tools) – ReadWrite

Flush with funding, AI startup led by NCSU professor is hiring in the Triangle – WRAL Tech Wire

Posted: at 12:25 am

RALEIGH - InsightFinder has raised an additional $2,010,000 from 10 investors, including IDEA Fund Partners.

In an SEC filing, the company noted the round involved "an option, warrant, or other right to acquire another security," as well as "security to be acquired upon the exercise of option, warrant, or other right to acquire security."

The company was founded by Dr. Helen Gu, a professor at N.C. State, and provides software to clients like Dell, Credit Suisse, and China Mobile that detects system anomalies without thresholds. The firm says this results in predicting severe incidents hours before they happen, and that the technology can automatically pinpoint root causes.

"The company is definitely hiring in the Triangle," said Gu. "InsightFinder is under rapid growth and we see increasing demand from customers because almost every business relies on a reliable IT system to function and InsightFinder provides the right product for this pressing market demand."

NCSU professor's artificial intelligence startup raises $2M including Valley money

According to a statement from the company, the funding comes from Fellows.Fund, Silicon Valley Future Capital, Eastlink Capital, and Brightway Future Capital. IDEA Fund Partners also participated in the round, said Lister Delgado, managing partner.

"We funded the company not too long ago," said Delgado. That prior funding came in January 2021 and was $725,000 from seven investors, according to an SEC filing.

"This is an expansion of that funding and was a bit opportunistic," said Delgado about the most recent raise. "It is a highly reputable group of investors that we wanted to add to the cap table. We were not really pursuing the money."

Delgado is a member of the company's board of directors.

See the original post:

Flush with funding, AI startup led by NCSU professor is hiring in the Triangle - WRAL Tech Wire

Posted in Ai | Comments Off on Flush with funding, AI startup led by NCSU professor is hiring in the Triangle – WRAL Tech Wire

Getting Started with Legal Contract Management Software and AI – JD Supra

Posted: at 12:25 am

Legal contract management software can drastically streamline contract creation, review, execution and management processes that are often fraught with complications and errors.

Data from the World Commerce & Contracting Association supports this idea. The organization recently surveyed its 70,000+ members about their contract challenges and priorities and found that 85% experience pressure for contract simplification. Another 81% said they have plans to implement contract automation. These points speak to the fact that poorly managed contracts lead to lost revenue, higher costs and more time devoted to manual tasks for all parties involved.

To understand the actual value of legal contract management software, it's helpful to recap the inefficiencies associated with contract handling.

Manual processes open the door for errors and slow down overall contract execution. For example, approvals and negotiations done via email are often sluggish or overlooked. Untracked revisions can lead to confusion, conflict or non-compliance, and a lack of standard legal language may result in lengthy review times or require lawyers to get involved.

Disparate repositories result in inefficient reporting and reduce contract visibility. Contracts spread out over different repositories, departments and geographical locations make monitoring corporate contracts holistically almost impossible. Without tracking expiring contracts and renewals, companies run the risk of compliance exposure as well as revenue loss.

Changes occur over the lifetime of a contract, including renewal dates, pricing, emerging legal requirements and other events. They require amendments and approvals from the contract parties. If these changes arent managed, implemented and communicated correctly and quickly, organizations can increase compliance risks for themselves and all parties involved.

Legal contract management software can reduce the average hours spent on contracts by 20%, accelerate review and save on costs. Here's how:

CLM centralizes contract storage and automates the request, creation, negotiation, execution and management of any type of contract.

When you combine AI with CLM, you can lower the number of contracts needing to be reviewed. This gives the reviewer the ability to speed up a review and provide consistency across processes. AI also significantly enhances contract management after execution by extracting usable data from executed, legacy and third-party paper contracts.
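As a rough sketch of that post-signature extraction step: a simple regex pass, nothing like a production CLM model, and the contract text below is invented for illustration:

```python
import re
from datetime import date

CONTRACT = (
    "This Agreement is effective as of 2021-03-15 and renews automatically "
    "on 2022-03-15 unless terminated with 30 days written notice."
)

def extract_dates(text: str) -> list[date]:
    """Pull ISO-format dates out of contract text for centralized tracking."""
    return [date.fromisoformat(m) for m in re.findall(r"\d{4}-\d{2}-\d{2}", text)]

effective, renewal = extract_dates(CONTRACT)
print(f"effective {effective}, renews {renewal}")  # feed a renewal tracker
```

Real CLM AI handles free-form dates, clause types, and scanned paper, but the payoff is the same: turning executed contracts into structured, monitorable data.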

Not all CLM AI is created the same. To get the full benefits of contract lifecycle management solutions, you should carefully evaluate AI for both the pre- and post-signature phases of contract management.

Read the original post:

Getting Started with Legal Contract Management Software and AI - JD Supra

Posted in Ai | Comments Off on Getting Started with Legal Contract Management Software and AI – JD Supra

What to do when AI brings more questions than answers – VentureBeat

Posted: July 18, 2021 at 5:43 pm

All the sessions from Transform 2021 are available on-demand now. Watch now.

The concept of uncertainty in the context of AI can be difficult to grasp at first. At a high level, uncertainty means working with imperfect or incomplete information, but there are countless potential sources of uncertainty. Some, like missing information, unreliable information, conflicting information, noisy information, and confusing information, are especially challenging to address without a grasp of the causes. Even the best-trained AI systems can't be right 100% of the time. And in the enterprise, stakeholders must find ways to estimate and measure uncertainty to the extent possible.

It turns out uncertainty isn't necessarily a bad thing if it can be communicated clearly. Consider this example from machine learning engineer Dirk Elsinghorst: An AI is trained to classify animals in a safari to help safari-goers remain safe. The model trains with available data, giving animals a "risky" or "safe" classification. But because it never encounters a tiger, it classifies tigers as safe, drawing a comparison between the stripes on tigers and on zebras. If the model were able to communicate uncertainty, humans could intervene to alter the outcome.

There are two common types of uncertainty in AI: aleatoric and epistemic. Aleatoric accounts for chance, like differences in an environment and the skill levels of people capturing training data. Epistemic is part of the model itself: models that are too simple in design can have a high variation in outcome.
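A minimal way to see the epistemic kind in action, assuming nothing beyond the definitions above: fit an ensemble of identical models on resamples of the same noisy data, and the members will disagree most where the data gives them no coverage. The toy linear problem here is purely illustrative, not any particular research method:

```python
import random
import statistics

random.seed(0)

# Toy dataset: y = 2x + Gaussian noise (the noise itself is aleatoric).
xs = [i / 10 for i in range(100)]  # training inputs cover [0, 10)
ys = [2 * x + random.gauss(0, 0.5) for x in xs]

def fit_line(x, y):
    """Ordinary least squares for y = a*x + b, in closed form."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    return a, my - a * mx

# A "deep ensemble" in miniature: the same model fit on bootstrap resamples.
# Disagreement between members approximates epistemic uncertainty.
ensemble = []
for _ in range(20):
    idx = [random.randrange(len(xs)) for _ in xs]
    ensemble.append(fit_line([xs[i] for i in idx], [ys[i] for i in idx]))

def predict_with_uncertainty(x):
    preds = [a * x + b for a, b in ensemble]
    return statistics.mean(preds), statistics.stdev(preds)  # mean, epistemic std

# Inside the training range the members agree; far outside it they diverge,
# which is the signal the safari classifier above lacked for "tiger".
_, eps_in = predict_with_uncertainty(5.0)    # in-distribution input
_, eps_out = predict_with_uncertainty(50.0)  # out-of-distribution input
print(f"epistemic std at x=5:  {eps_in:.3f}")
print(f"epistemic std at x=50: {eps_out:.3f}")
```

The out-of-distribution spread is far larger, and that gap, not the point prediction itself, is what a model would surface to say "I haven't seen this before."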

Observations, or sample data, from a domain or environment often contain variability. Typically referred to as noise, variability can be due to natural causes or an error, and it impacts not only the measurements AI learns from but the predictions it makes.

In the case of a dataset used to train AI to predict species of flowers, for instance, noise could be larger or smaller flowers than normal or typos when writing down the measurements of various petals and stems.

Another source of uncertainty arises from incomplete coverage of a domain. In statistics, samples are randomly collected, and bias is to some extent unavoidable. Data scientists need to arrive at a level of variance and bias that ensures the data is representative of the task a model will be used for.

Extending the flower-classifying example, a developer might choose to measure the size of randomly selected flowers in a single garden. The scope is limited to one garden, which might not be representative of gardens in other cities, states, countries, or continents.

As Machine Learning Mastery's Jason Brownlee writes: "There will always be some unobserved cases. There will be part of the problem domain for which we do not have coverage. No matter how well we encourage our models to generalize, we can only hope that we can cover the cases in the training dataset and the salient cases that are not."

Yet another dimension of uncertainty is error. A model will always have some error, introduced during the data prep, training, or prediction stages. Error could refer to imperfect predictions or omission, where details are left out or abstracted. This might be desirable: by selecting simpler models as opposed to models that may be highly specialized to the training data, the model will generalize to new cases and have better performance.

Given all the sources of uncertainty, how can it be managed, particularly in an enterprise environment? Probability and statistics can help reveal variability in noisy observations. They can also shed light on the scope of observations, as well as quantify the variance in the performance of predictive models when applied to new data.

The fundamental problem is that models assume the data they'll see in the future will look like the data they've seen in the past. Fortunately, several approaches can reliably sample a model to understand its overall confidence. Historically, these approaches have been slow, but researchers at MIT and elsewhere are devising new ways to estimate uncertainty from only one or a few runs of a model.

"We're starting to see a lot more of these [neural network] models trickle out of the research lab and into the real world, into situations that are touching humans with potentially life-threatening consequences," Alexander Amini, who recently presented research on a new method to estimate uncertainty in AI-assisted decision-making, said in a statement. "Any user of the method, whether it's a doctor or a person in the passenger seat of a vehicle, needs to be aware of any risk or uncertainty associated with that decision." He envisions the system not only quickly flagging uncertainty, but also using it to make more conservative decisions in risky scenarios, like when an autonomous vehicle approaches an intersection. "Any field that is going to have deployable machine learning ultimately needs to have reliable uncertainty awareness."

Earlier this year, IBM open-sourced Uncertainty Quantification 360 (UQ360), a toolkit focused on enabling AI to understand and communicate its uncertainty. UQ360 offers a set of algorithms and a taxonomy to quantify uncertainty, as well as capabilities to measure and improve uncertainty quantification (UQ). For every UQ algorithm provided in the UQ360 Python package, a user can make a choice of an appropriate style of communication by following IBMs guidance on communicating UQ estimates, from descriptions to visualizations.

"Common explainability techniques shed light on how AI works, but UQ exposes limits and potential failure points," IBM research staff members Prasanna Sattigeri and Q. Vera Liao note in a blog post. "Users of a house price prediction model would like to know the margin of error of the model predictions to estimate their gains or losses. Similarly, a product manager may notice that an AI model predicts a new feature A will perform better than a new feature B on average, but to see its worst-case effects on KPIs, the manager would also need to know the margin of error in the predictions."
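The house-price example translates directly into a prediction interval. A crude version looks like this (the residuals are hypothetical numbers, and this is not UQ360's actual API, which offers far richer algorithms and communication styles):

```python
import statistics

# Residuals of a hypothetical house-price model on held-out sales, in $1000s.
residuals = [-12.0, 8.5, -3.2, 15.1, -9.8, 4.4, -6.7, 11.3, -2.1, 7.9]

def margin_of_error(resids, z=1.96):
    """Half-width of a ~95% prediction interval, assuming roughly normal errors."""
    return z * statistics.stdev(resids)

point_estimate = 350.0  # model's predicted price for one listing, in $1000s
moe = margin_of_error(residuals)
print(f"predicted ${point_estimate:.0f}k, margin of error ${moe:.1f}k")
```

Reporting the prediction together with that margin is what lets a buyer, or a product manager reading KPI forecasts, reason about worst-case outcomes rather than a single point estimate.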

In a recent study, Harvard University assistant professor Himabindu Lakkaraju found that showing uncertainty metrics to both people with a background in machine learning and non-experts had an equalizing effect on their resilience to AI predictions. While fostering trust in AI may never be as simple as providing metrics, awareness of the pitfalls could go some way toward protecting people from machine learning's limitations, a critical aim in the business domain.

See the original post:

What to do when AI brings more questions than answers - VentureBeat

Posted in Ai | Comments Off on What to do when AI brings more questions than answers – VentureBeat

Spielberg’s AI at 20: The best film about the afterlife of gadgets – CNET

Posted: at 5:43 pm

Spielberg's AI: Artificial Intelligence just gets better with age.

It's been 20 years, and I'm still not sure what to make of the movie AI: Artificial Intelligence. But I keep rewatching it every year or two, and it always haunts me. I think I know why.

Steven Spielberg's completion of an idea first dreamed up by Stanley Kubrick arrived at the end of June 2001. It was Spielberg's first film after 1998's Saving Private Ryan. I watched it in a movie theater in Los Angeles, when I lived out west. I remember the film's strangeness washing over me in the dark.

Is AI a commentary on Spielberg's own childhood wish fulfillments? An inversion of the films I saw of his when I was a kid? A blend of his wide-eyed emotional spirit, and his cynical, dark films on war? I watch it because it reminds me, over and over, of the future of gadgets when humanity dies.

The film's about a beta test of a robotic child called David, who is briefly adopted and cared for by an employee of the company that made him (It? What is a robot's proper pronoun?), a surrogate child while their own son is in a medically induced coma. Their real child then recovers, and the family rejects David, no longer needing him, even finding him threatening and dangerous... and they abandon him. From there, the film becomes an odyssey in which the robotic boy learns about the cruel, changed world and tries to find his maker. It's Pinocchio, but it's also a story about a tech company that overreaches to achieve perfection. It's Jurassic Park, but the dinosaurs outlive the humans and we see where they end up in another 2,000 years.

Spielberg blended with Kubrick seems a weird cocktail: I think of Kubrick as a brilliantly cold filmmaker, while the Spielberg films I grew up on lean toward melodramatic emotional swells. But as I've gotten older, my favorite Spielberg is cold Spielberg (Munich, The Post, Minority Report, Bridge of Spies). The icy tone running through AI, even 20 years later, still feels futuristic. I feel like I'm peering through a door into the unknown.

Plenty of people hate Spielberg's AI and it doesn't rank all that high in many of his all-time lists. On some days, it's one of my favorite science fiction films ever. But there are bumps. Some moments ring awkward and cheesy (the emotional journey of his "parents," and many parts involving real people in the theme park-like Rouge City). The film's emotional journey, bridging fairy tale and gritty cyberpunk, has cracks (some scenes feel like they linger too long, others jump forward too fast). The depiction of tech hasn't always aged well (no one has phones, but also, a key plot point involves a kiosk that acts as an elaborate search engine. Why wouldn't anyone be able to do this with a device?). An astounding percentage of the two hour-plus film seems to take place in an extended ending that moves forward with excruciating slowness. Yet I'm always riveted.

Along with Minority Report, released the following summer in 2002, this film represents Spielberg's dark sci-fi phase. AI and Minority Report feel like bookends, companion films. AI lingers with me far more. And I haven't even mentioned David's robot teddy-bear companion, or Jude Law's stunning robot Gigolo Joe, and how the three of them feel like some sort of deep-future retelling of The Wizard of Oz.

It's because it's a story about abandoned tech. David is a gadget prototype. He finds himself wondering about his own existence, and can't justify the answers. No one can. It's a story that dreams of where all our supposedly fantastic tech toys go in the years, and decades, that follow. The old Anki Cozmos and Jibos, the social networks and game platforms I imagine crumbling to dust. Some will remain. Some will be Swiss-cheesed. Some will linger. Some will be reinvented, the parts tinkered with and hacked.

Movies like Wall-E have dreamed of similar ideas. As has tons of science fiction: Cory Doctorow, Ted Chiang and Annalee Newitz come to mind, but there are many more.

AI's cold-souled presence also feels like a final twist of the knife in my childhood. Those '80s family-friendly films Spielberg crafted linger in the first half of AI. The feeling is manufactured, though. David's placement in his family is an experiment, a forced action. It's cruel and doesn't consider anything other than the present moment. And then, like my favorite fiction (Neal Stephenson's love of accelerating thousands of years in Seveneves or Anathem, or the jumps in Foundation, or Charles Stross' Accelerando), AI jumps impossibly far along. The ending isn't weird or infinite enough for me. But it suggests that feeling of cosmic horror for tech's future that I've thought about when I look at small, strange, emerging products, new AR headsets, little watches or toy robots with firmware updates.

So much of AI still seems prescient. The flooded cities and climate-crisis-driven wreckage. The undercurrent of public distrust of tech, and a human-centric type of robot-targeted racism that feeds evangelical rallies. A Steve Jobs-like creator of new tech who plays God with calm conviction. Also, of course, the very idea of feeling emotional connection with a robot.

I don't know if any film or TV show has ever captured artificial intelligence perfectly for me. (2001 is good, of course. Ex Machina didn't wow me, and I don't tend to love films about robots.) Robotics and software are tough territories. But I'm perpetually amazed by Haley Joel Osment's performance in this movie. It annoyed me at times when I first saw it, just a few years after The Sixth Sense. Was I meant to care, or feel repelled? Now it feels like an amazing balancing act between emotional charm and alienation. Osment's waxy face, eerie smile and continuous need to be loved are perfect.

Because AI imagines itself as a dark fairy tale, I forgive its sometimes illogical plot turns. I burst into tears at times: when David is alone at the bottom of the ocean, praying for a miracle. His wish is granted, but just for a moment. Some scenes, like the one in which David confronts his creators, or almost kills his brother, still shock me with some of their cold vibes. It's this dance of emotions that keeps me coming back.

Or maybe it's because AI is, in one sense, a nightmarish future version of my New Jersey work commute to Manhattan. The film takes place in New Jersey, in some future where New York City is ruined. We watch a robot child wander from the suburbs into the heart of where New York City still half-stands.

As I've gotten older, I've also seen the film differently. When I was living alone in LA and wandering, uncertain of my career and life, I thought it was about the emotional lives of robots. Later, when I became a parent, I saw it as a tale about parenthood and consumerism. Would I buy a robot? What would that do to my family? Why do I buy so much tech? Now I see it as a story of how humanity can't stop playing God. David's return to Cybertronics, and his whole journey, feel like a manipulation. And the ending after that, where David is brought back to life, is set in a world where only "mecha" have survived. But these evolved robots do exactly what we used to do: simulate life, experiment with creation.

Is David really thinking and feeling, or is it a simulation all along? Are we part of a filmic Turing test? I turn that over in my head. And what is a gadget, or a creation, without its creator? A novella by Ted Chiang, called The Lifecycle of Software Objects, imagined intelligent creations that were eventually abandoned, obsolete, and had to be cared for as the world they were made to be compatible with kept changing. AI asks these questions: all the old robots rounded up, the models that know that sooner or later they're going to be replaced. David, the robot boy who seems to be so special, is particularly so because he's oblivious to this process.

AI is a flawed vision of the future, and maybe it was never destined to be perfect science fiction. The future is an unknown. Months after AI came out, I flew back to New York to be with my family after the Sept. 11 attacks. In Spielberg's movie, the Twin Towers still exist in frozen Manhattan, 2,000 years from now. I see that artifact of another timeline now and it reminds me of how much time has passed since 2001. How much the world has changed.

In 2021, though, we're more concerned about the climate crisis than ever. We haven't figured out how to resolve our psychological dependencies on tech. And tech companies are trying now more than ever to mine empathy and emotional connection through products. The basic premise of AI hasn't aged. It's just got a little dust on its box.

(By the way, if you want to read a great book about actual artificial intelligence, start with this one by Janelle Shane.)

Here is the original post:

Spielberg's AI at 20: The best film about the afterlife of gadgets - CNET

Posted in Ai | Comments Off on Spielberg’s AI at 20: The best film about the afterlife of gadgets – CNET

As the Use of AI Spreads, Congress Looks to Rein It In – WIRED

Posted: at 5:43 pm

There's bipartisan agreement in Washington that the US government should do more to support development of artificial intelligence technology. The Trump administration redirected research funding toward AI programs; President Biden's science adviser Eric Lander said of AI last month that "America's economic prosperity hinges on foundational investments in our technological leadership."

At the same time, parts of the US government are working to place limits on algorithms to prevent discrimination, injustice, or waste. The White House, lawmakers from both parties, and federal agencies including the Department of Defense and the National Institute for Standards and Technology are all working on bills or projects to constrain potential downsides of AI.

Biden's Office of Science and Technology Policy is working on addressing the risks of discrimination caused by algorithms. The National Defense Authorization Act passed in January introduced new support for AI projects, including a new White House office to coordinate AI research, but also required the Pentagon to assess the ethical dimensions of AI technology it acquires, and NIST to develop standards to keep the technology in check.

In the past three weeks, the Government Accountability Office, which audits US government spending and management and is known as Congress's watchdog, released two reports warning that federal law enforcement agencies aren't properly monitoring the use and potential errors of algorithms used in criminal investigations. One took aim at face recognition, the other at forensic algorithms for face, fingerprint, and DNA analysis; both were prompted by lawmaker requests to examine potential problems with the technology. A third GAO report laid out guidelines for responsible use of AI in government projects.

Helen Toner, director of strategy at Georgetown's Center for Security and Emerging Technology, says the bustle of AI activity provides a case study of what happens when Washington wakes up to new technology.

In the mid-2010s, lawmakers didn't pay much notice as researchers and tech companies brought about a rapid increase in the capabilities and use of AI, from conquering champs at Go to ushering smart speakers into kitchens and bedrooms. The technology became a mascot for US innovation, and a talking point for some tech-centric lawmakers. Now the conversations have become more balanced and business-like, Toner says. "As this technology is being used in the real world you get problems that you need policy and government responses to."

Face recognition, the subject of GAOs first AI report of the summer, has drawn special focus from lawmakers and federal bureaucrats. Nearly two dozen US cities have banned local government use of the technology, usually citing concerns about accuracy, which studies have shown is often worse on people with darker skin.

The GAOs report on the technology was requested by six Democratic representatives and senators, including the chairs of the House oversight and judiciary committees. It found that 20 federal agencies that employ law enforcement officers use the technology, with some using it to identify people suspected of crimes during the January 6 assault on the US Capitol, or the protests after the killing of George Floyd by Minneapolis police in 2020.

Fourteen agencies sourced their face recognition technology from outside the federal government, but 13 did not track what systems their employees used. The GAO advised agencies to keep closer tabs on face recognition systems to avoid the potential for discrimination or privacy invasion.

The GAO report appears to have increased the chances of bipartisan legislation constraining government use of face recognition. At a hearing of the House Judiciary Subcommittee on Crime, Terrorism, and Homeland Security held Tuesday to chew over the GAO report, Representative Sheila Jackson Lee (D-Texas), the subcommittee chair, said that she believed it underscored the need for regulations. The technology is currently unconstrained by federal legislation. Ranking member Representative Andy Biggs (R-Arizona) agreed. "I have enormous concerns, the technology is problematic and inconsistent," he said. "If we're talking about finding some kind of meaningful regulation and oversight of facial recognition technology then I think we can find a lot of common ground."

See more here:

As the Use of AI Spreads, Congress Looks to Rein It In - WIRED

Posted in Ai | Comments Off on As the Use of AI Spreads, Congress Looks to Rein It In – WIRED

Beware explanations from AI in health care – Science

Posted: at 5:43 pm

Artificial intelligence and machine learning (AI/ML) algorithms are increasingly developed in health care for diagnosis and treatment of a variety of medical conditions (1). However, despite the technical prowess of such systems, their adoption has been challenging, and whether and how much they will actually improve health care remains to be seen. A central reason for this is that the effectiveness of AI/ML-based medical devices depends largely on the behavioral characteristics of its users, who, for example, are often vulnerable to well-documented biases or algorithmic aversion (2). Many stakeholders increasingly identify the so-called black-box nature of predictive algorithms as the core source of users' skepticism, lack of trust, and slow uptake (3, 4). As a result, lawmakers have been moving in the direction of requiring the availability of explanations for black-box algorithmic decisions (5). Indeed, a near-consensus is emerging in favor of explainable AI/ML among academics, governments, and civil society groups. Many are drawn to this approach to harness the accuracy benefits of noninterpretable AI/ML such as deep learning or neural nets while also supporting transparency, trust, and adoption. We argue that this consensus, at least as applied to health care, both overstates the benefits and undercounts the drawbacks of requiring black-box algorithms to be explainable.

Read more from the original source:

Beware explanations from AI in health care - Science

Posted in Ai | Comments Off on Beware explanations from AI in health care – Science
