
Category Archives: Ai

How Etsy Is Giving AI To Its Army Of 5 Million Artisan-Entrepreneurs To Build The Anti-Amazon – Forbes

Posted: October 21, 2021 at 10:29 pm

Gabby Jones for Forbes

On April 2, 2020, during the chaotic early days of the pandemic, Etsy CEO Josh Silverman awoke to a sales shock.

Every four hours, the company's data-junkie boss received an update on the volume of personalized pillows, hand-sewn stuffed animals, vintage Victorian lockets and millions of other one-off items sold through the digital marketplace. Silverman had been rushing to cut Etsy's marketing spending to prepare for a Covid-induced slump. But the latest report showed a surge.

The source: face masks. The press was reporting that the federal Centers for Disease Control was soon to recommend face coverings for all Americans. With inventory already difficult for first responders to find, civilians were flocking to Etsy's ragtag community of hobbyists for their pandemic protection. "Until that day, if you searched Etsy for 'mask' you'd see Halloween costumes or face cream," says Silverman, sitting cross-legged on a handmade modern wingback chair in Etsy's still-deserted Brooklyn headquarters. "We had an emergency meeting to decide whether to double down on masks."

The team was split. Some saw the face mask market as a fad. To others, it offered a chance for Etsy to show off the power and flexibility of its decentralized, nearly 3 million-strong seller community. "This was our Dunkirk, where we could mobilize cottage industry to come to the rescue," says Silverman, who is 52. "The world's supply chains had locked up. You couldn't get face masks. Yet Etsy's supply chain was just two hands making."

Etsy rallied its sellers, emailing them info on mask materials and designs. Programmers retooled the site toward selling the PPE; the marketing team peppered the web and social media with ads. Within a day, an army of 10,000 independent crafters was hawking masks on Etsy. Within two weeks, 100,000 sellers were. By the end of 2020, Etsy had moved more than $740 million worth of masks, accounting for 7% of its $10.3 billion in gross sales (the value of everything sold on the site; Etsy takes a cut of each sale). It turned out to be perfectly positioned for the pandemic: Sellers had more time to craft furniture, art and toys, and quarantined customers were looking to buy it all. Etsy's annual revenue increased 111%, to $1.7 billion; net income was up 264%.

"Home suddenly became your office, playground and day-care center," says Jefferies analyst John Colantuoni. "That drove demand for unique and handmade products."

Since Covid's March 2020 lows, Etsy shares are up some 600%, torching the Nasdaq (up 115%), eBay (175%), Walmart (35%) and Amazon (100%). The 16-year-old company is worth just shy of $30 billion. Active buyers and sellers on Etsy have doubled to 90 million and 5 million, respectively. As with most digital retailers, growth has slowed in the second half of 2021 as the economy has reopened, but analysts are betting Etsy will hit a 30% sales increase in 2021.


Let Amazon, Walmart and Target battle to deliver mass-produced items as cheaply and quickly as possible. Etsy has empowered an eclectic (and mostly female) community of crafters with the same cutting-edge AI, data science and marketing tools that the retail giants use. In doing so, Etsy, a member of Forbes' Just 100 list of the top corporate citizens, has provided millions of moonlighters with crucial income, and purpose, in a time of unprecedented layoffs, lockdowns and dislocations. Says Silverman, "Our mission is to keep commerce human."

Etsy has always been the crunchy kid at the country club. Founded by Brooklyn artisan Rob Kalin, it churned through CEOs before going public in 2015 as a Certified B Corporation beholden to strict environmental and community standards. Wall Street hated its do-gooder stance, and its red ink. In 2016, Etsy's net losses grew 45%, to $54 million. The next year, investors Black-And-White Capital, TPG and Dragoneer bought up shares, hoping to force Etsy to sell itself. Etsy pushed back. The board scrambled for a CEO to balance its mission-based employees and its money-obsessed investors. Silverman, who had joined the board in 2016, seemed a good fit.

Raised in Ann Arbor, Michigan, Silverman got a BA in public policy at Brown in 1991, worked for progressive New Jersey Senator Bill Bradley and later earned a Stanford MBA. In 1998 he cofounded Evite, the online invitation manager, before spending five years leading eBay marketplaces abroad. He turned around a struggling Skype in 2008 and later ran American Express's credit-card business from 2011 to 2015.

With a maniacal focus on upping Etsy's gross sales, he quickly slashed staff, departed most international regions and cut projects that wouldn't create at least $10 million in gross sales. That included Etsy Studios, a craft supply website that 150 people, about 15% of Etsy's total staff, had spent 18 months building. "It was as painful as it sounds, a real gut punch," Silverman says. "We encouraged people who were motivated and believed to stay, and those skeptical to leave."

He improved Etsy's search tools, scrapped in-house servers for the cloud and invested in customer service. By 2019, Etsy's market cap had risen 300%, to $5 billion. In all, since Silverman took the helm, shares have returned some 1,800%. "Prior to the pandemic, Josh did a great job focusing on the things that moved the needle on gross sales," says Citi analyst Nicholas Jones. "It positioned Etsy to benefit from the demand surge."

One challenge: enabling customers to find a one-of-a-kind product on Etsy as easily as they can a commodity on Amazon. To improve search and product recommendations, it's building AI-powered computer vision tools to identify, tag and create structured data for its millions of unique items.
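The idea of turning raw model predictions into structured, searchable attributes can be sketched in a few lines. This is an illustrative toy, not Etsy's actual pipeline: the `classify_image` stub, the tag vocabulary and the confidence threshold are all invented for the example.

```python
# Hypothetical sketch: converting classifier output into structured
# listing attributes a search index can filter on. The model stub,
# threshold and field names are illustrative assumptions, not Etsy's
# real system.

CONFIDENCE_THRESHOLD = 0.6

def classify_image(image_path):
    """Stand-in for a trained computer-vision model that returns
    (tag, confidence) pairs for a product photo."""
    return [("chair", 0.92), ("wingback", 0.81), ("velvet", 0.4)]

def build_structured_listing(listing_id, image_path):
    """Keep only high-confidence tags and emit a structured record."""
    tags = [
        tag for tag, score in classify_image(image_path)
        if score >= CONFIDENCE_THRESHOLD
    ]
    return {"listing_id": listing_id, "attributes": sorted(tags)}

record = build_structured_listing("etsy-123", "photo.jpg")
print(record)  # {'listing_id': 'etsy-123', 'attributes': ['chair', 'wingback']}
```

In a real deployment the stub would be replaced by a trained model, and the structured records would feed the search and recommendation systems the article describes.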

Notorious for slow deliveries, Etsy is also raising expectations for sellers. Crafters are being pushed to provide transparent timelines and improve customer communication. A new dashboard will show vendors how they rate for customer service and satisfaction. Overachievers will get higher visibility on the site. "We need to do what our sellers need, not want," Silverman says. "To serve the sellers, you need to obsess over the buyer experience."


Artificial Intelligence project aims to improve standards and development of AI systems – University of Birmingham

Posted: at 10:29 pm

A new project has been launched in partnership with the University of Birmingham aiming to address racial and ethical health inequalities using artificial intelligence (AI).

STANDING Together, being led by University Hospitals Birmingham NHS Foundation Trust (UHB), aims to develop standards for datasets that AI systems use, to ensure they are diverse, inclusive and work across all demographic groups. The resulting standards will help regulators, commissioners, policymakers and health data institutions assess whether AI systems are underpinned by datasets that represent everyone, and don't leave underrepresented or minority groups behind.

Xiao Liu, Clinical Researcher in Artificial Intelligence and Digital Healthcare at the University of Birmingham and UHB, and STANDING Together project co-leader, said: "We're looking forward to starting work on our project, and developing standards that we hope will improve the use of AI both in the UK and around the world. We believe AI has enormous potential to improve patient care, but through our earlier work on producing AI guidelines, we also know that there is still lots of work to do to make sure AI is a success story for all patients. Through the STANDING Together project, we will work to ensure AI benefits all patients and not just the majority."

The NHSX NHS AI Lab, the NIHR, and the Health Foundation have awarded a total of £1.4m to four projects, including STANDING Together. The other organisations working with UHB and the University of Birmingham on STANDING Together are the Massachusetts Institute of Technology, Health Data Research UK, Oxford University Hospitals NHS Foundation Trust, and The Hospital for Sick Children (SickKids, Toronto).

The NHS AI Lab introduced the AI Ethics Initiative to support research and practical interventions that complement existing efforts to validate, evaluate and regulate AI-driven technologies in health and care, with a focus on countering health inequalities. Today's announcement is the result of the Initiative's partnership with The Health Foundation on a research competition, enabled by NIHR, to understand and enable opportunities to use AI to address inequalities and to optimise datasets and improve AI development, testing and deployment.

Brhmie Balaram, Head of AI Research and Ethics at NHSX, said: We're excited to support innovative projects that demonstrate the power of applying AI to address some of our most pressing challenges; in this case, we're keen to prove that AI can potentially be used to close gaps in minority ethnic health outcomes. Artificial intelligence has the potential to revolutionise care for patients, and we are committed to ensuring that this potential is realised for all patients by accounting for the health needs of diverse communities."

Dr Indra Joshi, Director of the NHS AI Lab at NHSX, added: "As we strive to ensure NHS patients are amongst the first in the world to benefit from leading AI, we also have a responsibility to ensure those technologies don't exacerbate existing health inequalities. These projects will ensure the NHS can deploy safe and ethical Artificial Intelligence tools that meet the needs of minority communities and help our workforce deliver patient-centred and inclusive care to all."


[Video] Here’s Why You Need to Tune In to the Samsung AI Forum 2021 – Samsung Newsroom

Posted: at 10:29 pm

Each year, the Samsung AI Forum (SAIF) gathers world-renowned academics and industry experts to discuss the latest developments in the field of artificial intelligence (AI). This year's event will run from November 1st to 2nd and will be broadcast live via Samsung Electronics' YouTube channel.

To offer viewers a glimpse of the exciting topics that will be discussed at SAIF 2021, Samsung has released a pair of teaser videos previewing the two-day event's distinguished speakers and sessions.

Those who are interested can register to participate through the Samsung AI Forum's website up until the day of the event. Those who do so will be able to access SAIF's schedule and submit questions for the experts before the event kicks off. In the meantime, check out the videos below for a preview of what SAIF 2021 has in store, and stay tuned to Samsung Newsroom for more updates.


University of California to publish database of how it uses AI – EdScoop

Posted: at 10:29 pm

The University of California announced plans Monday to launch a public database and assess how the system is using artificial intelligence-based technologies.

Administrators said they'll follow four recommendations from a new Responsible Artificial Intelligence report on risks and opportunities in academics, health, human resources and policing. The report sheds light on how the 10-campus, 250,000-student system currently uses AI and includes recommendations on incorporating eight guiding ethical principles for UC's use of AI in its services, such as procurement, and monitoring the impact of automated decision-making, facial recognition and chatbots. UC also plans to establish departmental AI councils, according to the report.

The report is among the first of its kind for colleges and universities, according to a statement UC emailed to EdScoop.

"Because of UC's size and stature as a preeminent public research university, as well as California's third-largest employer, the principles and guidance from the report have the potential to positively inform the development and implementation of AI standards beyond university settings, within the spheres of research, business, and government," the statement reads.

A working group, convened by UC President Michael Drake in 2020, gathered information to better understand the AI landscape at UC. According to the system's statement, the group focused on where AI is most likely to affect individual rights in university settings.

In academics, the report broke recommendations down into how AI could affect admissions and financial aid, student success, mental health, and grading and remote proctoring. The report recommended using AI-powered software to inform human decision-making, or in areas where the technology could proactively improve the student experience, like AI-powered chatbots.

But administrators need to be careful in using AI to automate decisions entirely, the report states, citing concerns about historical bias being reinforced through the software pulling from previous data, particularly in admissions. Some UC campuses already use formulas to help sort through admissions, and AI could help clean data on those admissions and automatically calculate scores, but there must be human oversight to ensure equity, the report states.

"This means that the computational model must be able to take into account difficult-to-quantify criteria, such as valuing life experiences as part of a student's capacity for resilience and persistence needed to complete college-level work," it reads. "If the computational model does not accommodate criteria such as life experience, a human must remain in the loop on that part of the review."
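The "human in the loop" rule the report describes can be made concrete with a small routing sketch. The field names and the scoring formula below are hypothetical, invented only to illustrate the policy: when a hard-to-quantify criterion such as life experience cannot be scored, the application is routed to a human reviewer instead of being scored automatically.

```python
# Illustrative sketch of the report's human-in-the-loop rule.
# Field names and weights are hypothetical, not UC's actual process.

def route_application(app):
    """Return ('auto', score) when every criterion is quantifiable,
    otherwise ('human_review', None) so a person reviews it."""
    if app.get("life_experience_score") is None:
        # The difficult-to-quantify criterion is missing: a human
        # must remain in the loop on this part of the review.
        return ("human_review", None)
    score = 0.7 * app["academic_score"] + 0.3 * app["life_experience_score"]
    return ("auto", round(score, 2))

print(route_application({"academic_score": 3.8, "life_experience_score": 4.0}))
# ('auto', 3.86)
print(route_application({"academic_score": 3.8, "life_experience_score": None}))
# ('human_review', None)
```

The point of the design is that automation handles only what it can score fairly; anything outside the model's criteria falls back to human judgment rather than a default score.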

Colleges and universities across the country are using AI in everyday processes, lightening workloads so human employees can focus on tasks that require human judgement or empathy.

But there are also widespread ethical concerns about AI reinforcing historical bias. Universities often lack a universal ethical approach when buying new technologies, according to Brandie Nonnecke, the founding director of UC Berkeley's lab on technology use policy and one of the working group members.

"It's good we're setting up these processes now," Nonnecke said in the press release. "Other entities have deployed AI and then realized that it's producing discriminatory or less efficient outcomes. We're at a critical point where we can establish governance mechanisms that provide necessary scrutiny and oversight."


Building the infrastructure needed to secure research mega-grants with AI tools – EdScoop

Posted: at 10:29 pm

Higher education institutions looking to expand their research capacity to include artificial intelligence capabilities are often hindered by their legacy IT infrastructure. However, according to a recent white paper, IT leaders can look to solutions that help them bring together their siloed infrastructure and expand their ability to ingest and analyze data with modern tools.

The white paper, produced by World Wide Technology (WWT), Hewlett Packard Enterprise (HPE) and NVIDIA, discusses how the three industry leaders' capabilities work together to access large data sets from the data center out to the edge.

"Higher education research seeking to solve tomorrow's problems today requires high-performance computing platforms that leverage artificial intelligence at the edge to provide the right information to the right people at the right place and the right time," says HPE's vice president for U.S. public sector, Susan Shapero.

And as the education sector leans more on AI and high-performance computing to launch new research initiatives, they can lean on partnerships that help them access accelerated computing platforms, adds Cheryl Martin, director of higher education and research at NVIDIA.

The paper explains that even though higher education IT departments may not possess the bandwidth or experience necessary to select, deploy and integrate the solutions their research departments require, they can turn to trusted partners like WWT to reduce complexity and configure their infrastructure to build the best solution that leads to winning proposals.

Learn more about how higher education institutions can develop IT capacity to integrate modern research tools.

Join the WWT Higher Education Research Community today to learn from your peers about industry partnerships that deliver exponential performance gains.

This article was produced by EdScoop for, and sponsored by, NVIDIA, HPE and WWT.


Facebook wants AI to find your keys and understand your conversations – The Conversation AU

Posted: at 10:29 pm

Facebook has announced a research project that aims to push the frontier of first-person perception, and in the process help you remember where you left your keys.

The Ego4D project provides a huge collection of first-person video and related data, plus a set of challenges for researchers to teach computers to understand the data and gather useful information from it.

In September, the social media giant launched a line of smart glasses called Ray-Ban Stories, which carry a digital camera and other features. Much like the Google Glass project, which met mixed reviews in 2013, this one has prompted complaints of privacy invasion.

The Ego4D project aims to develop software that will make smart glasses far more useful, but may in the process enable far greater breaches of privacy.

Read more: Ray-Ban Stories let you wear Facebook on your face. But why would you want to?

Facebook describes the heart of the project as

a massive-scale, egocentric dataset and benchmark suite collected across 74 worldwide locations and nine countries, with over 3,025 hours of daily-life activity video.

The "Ego" in Ego4D means egocentric (or first-person video), while "4D" stands for the three dimensions of space plus one more: time. In essence, Ego4D seeks to combine photos, video, geographical information and other data to build a model of the user's world.

There are two components: a large dataset of first-person photos and videos, and a benchmark suite consisting of five challenging tasks that can be used to compare different AI models or algorithms with each other. These benchmarks involve analysing first-person video to remember past events, create diary entries, understand interactions with objects and people, and forecast future events.

The dataset includes more than 3,000 hours of first-person video from 855 participants going about everyday tasks, captured with a variety of devices including GoPro cameras and augmented reality (AR) glasses. The videos cover activities at home, in the workplace, and hundreds of social settings.

Although this is not the first such video dataset to be introduced to the research community, it is 20 times larger than publicly available datasets. It includes video, audio, 3D mesh scans of the environment, eye gaze, stereo, and synchronized multi-camera views of the same event.

Most of the recorded footage is unscripted, or "in the wild". The data is also quite diverse: it was collected from 74 locations across nine countries, and those capturing the data have various backgrounds, ages and genders.
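Research on a dataset this size typically starts with clip-level metadata rather than raw video. The sketch below shows the kind of filtering and aggregation involved; the schema is invented for illustration and is not the real Ego4D annotation format.

```python
# Toy example of aggregating clip-level metadata for an egocentric
# video dataset. The field names and values are hypothetical, not the
# actual Ego4D annotation schema.

clips = [
    {"clip_id": "a1", "country": "US", "scenario": "cooking",  "hours": 1.5},
    {"clip_id": "b2", "country": "IN", "scenario": "shopping", "hours": 2.0},
    {"clip_id": "c3", "country": "US", "scenario": "cooking",  "hours": 0.5},
]

def hours_by_scenario(clips):
    """Total footage hours per activity scenario, for sampling or
    checking how balanced the dataset is across activities."""
    totals = {}
    for clip in clips:
        totals[clip["scenario"]] = totals.get(clip["scenario"], 0.0) + clip["hours"]
    return totals

print(hours_by_scenario(clips))  # {'cooking': 2.0, 'shopping': 2.0}
```

At Ego4D's real scale (3,000+ hours, 74 locations), the same kind of summary is what lets researchers verify the diversity claims made above before training models on the footage.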

Commonly, computer vision models are trained and tested on annotated images and videos for a specific task. Facebook argues that current AI datasets and models represent a third-person or a spectator view, resulting in limited visual perception. Understanding first-person video will help design robots that better engage with their surroundings.

Furthermore, Facebook argues egocentric vision can potentially transform how we use virtual and augmented reality devices such as glasses and headsets. If we can develop AI models that understand the world from a first-person viewpoint, just like humans do, VR and AR devices may become as valuable as our smartphones.

Facebook has also developed five benchmark challenges as part of the Ego4D project. The challenges aim to build better understanding of video materials to develop useful AI assistants. The benchmarks focus on understanding first person perception. The benchmarks are described as follows:

Episodic memory (what happened when?): for example, figuring out from first-person video where you left your keys

Hand-object manipulation (what am I doing and how?): this aims to better understand and teach human actions, such as giving instructions on how to play the drums

Audio-visual conversation (who said what and when?): this includes keeping track of and summarising conversations, meetings or classes

Social interactions (who is interacting with whom?): this is about identifying people and their actions, with a goal of doing things like helping you hear a person better if theyre talking to you

Forecasting activities (what am I likely to do next?): this aims to anticipate your intentions and offer advice, like pointing out you've already added salt to a recipe if you look like you're about to add some more.
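The first of these benchmarks, episodic memory, can be illustrated with a toy query over timestamped detections. The detection-log format below is invented for the example; a real system would derive such a log from first-person video with object detectors.

```python
# Toy illustration of the episodic-memory task ("what happened when?"):
# answer "where did I last see X?" from a log of timestamped object
# detections. The log format is hypothetical, for illustration only.

detections = [
    {"t": 10.0, "object": "keys",  "location": "kitchen counter"},
    {"t": 42.5, "object": "phone", "location": "sofa"},
    {"t": 97.2, "object": "keys",  "location": "hallway table"},
]

def last_seen(detections, obj):
    """Return the location of the most recent sighting of obj,
    or None if it was never detected."""
    sightings = [d for d in detections if d["object"] == obj]
    if not sightings:
        return None
    return max(sightings, key=lambda d: d["t"])["location"]

print(last_seen(detections, "keys"))  # hallway table
```

The hard part in practice is, of course, producing a reliable detection log from hours of shaky first-person video; the query itself is the easy step.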

Obviously there are significant concerns regarding privacy. If this technology is paired with smart glasses constantly recording and analysing the environment, the result could be constant tracking and logging (via facial recognition) of people moving around in public.

Read more: Face masks and facial recognition will both be common in the future. How will they co-exist?

While the above may sound dramatic, similar technology has already been trialled in China, and the potential dangers have been explored by journalists.

Facebook says it will maintain high ethical and privacy standards for the data gathered for the project, including consent of participants, independent reviews, and de-identifying data where possible.

As such, Facebook says the data was captured in a controlled environment with informed consent, and in public spaces "faces and other PII [personally identifying information] are blurred".

But despite these reassurances (and noting this is only a trial), there are concerns over the future of smart-glasses technology coupled with the power of a social media giant whose intentions have not always been aligned with those of its users.

Read more: Artificial intelligence in Australia needs to get ethical, so we have a plan

The ImageNet dataset, a huge collection of tagged images, has helped computers learn to analyse and describe images over the past decade or more. Will Ego4D do the same for first-person video?

We may get an idea next year. Facebook has invited the research community to participate in the Ego4D competition in June 2022, and pit their algorithms against the benchmark challenges to see if we can find those keys at last.


Virginia Tech researchers garner NSF grant to connect AI with urban planning to improve decision making and service delivery – Virginia Tech Daily

Posted: at 10:29 pm

Tom Sanchez, professor of urban affairs and planning, and Chris North, professor of computer science and associate director of the Sanghani Center for Artificial Intelligence and Data Analytics, have been awarded a planning grant from the National Science Foundation's Smart and Connected Communities program.

The program is committed to "accelerating creation of scientific and engineering foundations that will enable smart and connected communities to bring about new levels of economic opportunity and growth, safety and security, health and wellness, accessibility and inclusivity, and overall quality of life."

"Urban planning anticipates and guides the future physical and social conditions of communities to improve quality of life, all with a heavy reliance on increasingly large and varied datasets," said Sanchez, who serves as principal investigator for the project. "In fact, cities have become primary sites of data collection and algorithm deployment, but the professional field of urban planning lacks a comprehensive evaluation of how artificial intelligence can and should be used to improve analytical processes. Our project will address that question."

North, a co-principal investigator, will lend his expertise in computer science and interactive artificial intelligence to apply new technologies to generate more and better data that can help improve decision making and service delivery, and increase efficiency.

"We will apply AI to the future of smart and connected communities, focusing on data and analytical tools that enable human stakeholders to interact with AI algorithms during plan making and municipal decision making," said North. "A major goal of the human-AI interaction is to help expose and reduce potential hidden racial biases, digital divides, and infringements on privacy."

In addition to North, Sanchez's project team includes Theo Lim, assistant professor of urban affairs and planning; Alec Smith, professor of behavioral economics, experimental economics, and neuroeconomics; and Trey Gordner, a master's degree student in urban and regional planning who is also pursuing the multidisciplinary National Science Foundation-sponsored Urban Computing certificate, administered through the Sanghani Center.

Sanchez said inspiration for the project came from the UrbComp program, which trains students in the latest methods in analyzing massive datasets to study key issues concerning urban populations.

The American Planning Association, with about 40,000 members, will help the team connect with professional planners around the country. Arlington County Planning is partnering with the research team as a specific case study to determine which operations have the highest likelihood of being assisted by AI technologies and which tasks include risks of unintended consequences that need to be addressed with caution. These include county-level responsibilities for comprehensive planning, land use, capital improvements, environment, parks, transportation, and utilities.

"As we develop creative solutions to urban planning processes that have relied on traditional, analog approaches, we anticipate detecting synergies between public and private sectors based on widespread adoption of AI technologies," said North. "Our hope is that the results of this research will catalyze AI startup activity in the urban planning field."

"Because the project is focused on public planning, there is an expectation that innovations in planning will involve public awareness and input," Sanchez said. "We believe we may also be able to shed some light on the broader impacts of automation in urban life, such as on the workforce."

In addition to specific contributions in the areas of research discovery and advances in practice, the project will expand education in the urban planning field through the development of case study materials suitable for coursework and training.

The duration of the $150,000 Smart and Connected Communities planning grant is one year, and it can be used to prepare NSF multiyear, multimillion-dollar grant proposals. The project has received additional funding from the 2021-22 Institute for Society, Culture and Environment Scholars program.


Why A.I. Is About To Trigger The Next Great Medical Breakthrough – Yahoo Finance

Posted: at 10:29 pm

The healthcare industry is one of the fastest-growing in the world, but today, it's seeing a major disruption.

With everything from work meetings to family gatherings going digital since 2020, this mammoth $12 trillion industry is joining the new tech renaissance to deliver a much-needed overhaul.

That's why trillion-dollar Big Tech companies like Google, Amazon, and Apple are all trying to make a move into healthcare.

Now, the way people make decisions about their health is beginning to look drastically different from how it did 24 months ago.

The Wall Street Journal says, "Tech advances put the annual doctor visit on the critical list."

And Forbes says, "Artificial intelligence offers an unprecedented opportunity to... reshape the practice of healthcare."

The shift towards a new Healthcare 2.0 could soon be the biggest disruption to a trillion-dollar industry since Amazon took over retail or Netflix changed the face of entertainment.

And while Big Tech is doing their best to make a move into this industry

Treatment.com International Inc. (CSE: TRUE; OTC: TREIF) holds a key advantage that gives it a huge leg up on Silicon Valley's finest, and it has a distinctly North American heritage, unlike many competitors.

Here are 5 reasons why you should pay close attention to Treatment.com:

1 - Billions of People are Embracing Healthcare 2.0

Over 1 billion people now use Google to research health symptoms each and every day, as Google has tried its best to leverage this into a new arm of the company.

Amazon jumped in with Amazon Care, offering up live chat or video care with clinicians when you have a problem.

And with Apple's iOS 15 rolling out, they've made it possible to share data from your iPhone's Health app directly with doctors through electronic medical records.

There's a staggering number of people taking their health into their own hands before they ever make it to the doctor's office.

But many have serious reservations about trusting Big Tech with their most sensitive health data, given its history of privacy issues. Built by doctors for doctors, Treatment.com's tri-level enterprise software brings a big-tech-style platform to healthcare with the Global Library of Medicine (GLM).


Treatment.com's platform technology, on the other hand, is already being used to train and test medical students at the University of Minnesota Medical School, a top 10 med school in the United States.

Now, their new app, CARA, plans to take that same powerful technology and deliver it to your cell phone.

CARA takes a unique approach that personalizes this data to give you the most accurate health information possible.

CARA

Using the latest AI technology and input from top physicians across the globe, CARA has been trained to think like a doctor.

It considers everything from your age and gender to your medical history and unique risk factors.

Now their new mobile app, which puts the power of great doctors at your fingertips, is expected to be unveiled across North America this fall.

With over 1 billion people taking their healthcare into their own hands, the CARA app could make a huge splash as it appears in app stores in the coming months.

2 - Billions of Dollars Changing Hands in the Industry

As technology has started playing a bigger and bigger role in healthcare, the amount of money behind it is reaching a fever pitch.

Last year, healthcare company Olive acquired Verata Health for $120 million.

Medtronic paid $158 million to acquire AI-driven tech company Medicrea.

And Microsoft made headlines when they acquired health tech company Nuance Communications for a whopping $19.7 billion.

That's approximately 125x the size of Treatment.com as it stands today.

With millions - and even billions - of dollars changing hands in the industry, these small healthcare tech companies are precious gems just waiting to be scooped up.

And Treatment.com's (CSE: TRUE; OTC: TREIF) potential to drive valuable revenue from multiple angles makes it potentially even more appealing to the big players in the industry.

First, they'll charge a flat monthly fee comparable to a Netflix subscription for their premium app, where you can get access to services like telemedicine, prescription, and other referrals.

Second, as Treatment.com's technology is already being used in a top-10 medical school to train med students, the company also plans to license the technology to universities for a monthly fee. And the scalable, plug-and-play GLM platform has many more modules planned for release in the future; the Treatment Mobile/CARA app is just the tip of the iceberg.

Plus, it's been tested in clinics to help streamline appointments and help doctors spend more time with their patients and less time on paperwork.

Since this could help drive up revenue in clinics, it's well worth paying a monthly licensing fee for this tech in this setting as well.

Finally, if they're able to catch the eye of insurance companies in the future, Treatment.com could be sitting on a cash cow with their new app.

How much would it be worth to insurance companies to cut healthcare costs by getting ahead of major medical problems or eliminating unnecessary doctor's visits?

It's safe to say that given the potential benefits for everyone - consumers, physicians, and insurance companies alike... Treatment.com could be a prime takeover target in the new age of Healthcare.

3 - Built to Get Smarter All The Time

While Treatment.com's (CSE: TRUE; OTC: TREIF) technology is incredibly complex, the way it works is simple.

Users can enter their symptoms, and CARA will provide the most likely diagnoses, preventative measures, and potential treatments.


Plus, the app helps you track symptoms over time and manage issues for everyone in the family on one account.

It can even integrate with popular wearables like the Apple Watch and Fitbit, which have soared in popularity...

Helping monitor important health data by integrating it in real time.

Treatment.com's AI engine pinpoints the most likely diagnoses by tapping into the most advanced medical database in the world, the Global Library of Medicine.

It's currently able to offer around 800 diagnoses.

But soon, that could rise to all 1,200 primary care diagnoses and eventually all 8,000 to 10,000 known diseases.
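To make the idea of symptom-based diagnosis ranking concrete, here is a purely illustrative sketch. This is not Treatment.com's actual algorithm, and the tiny knowledge base, priors, and scores below are invented for the example; a real system like the GLM would draw on far richer clinical data.

```python
# Toy illustration of symptom-to-diagnosis ranking. NOT Treatment.com's
# actual method: the knowledge base and scoring rule are invented here.

# Hypothetical mini knowledge base: diagnosis -> (prior prevalence, symptom profile)
KNOWLEDGE_BASE = {
    "common cold": (0.20, {"cough", "sore throat", "runny nose"}),
    "influenza":   (0.10, {"fever", "cough", "fatigue", "body aches"}),
    "migraine":    (0.05, {"headache", "nausea", "light sensitivity"}),
}

def rank_diagnoses(symptoms):
    """Return diagnoses sorted by a simple match score (higher = more likely)."""
    symptoms = set(symptoms)
    scores = {}
    for diagnosis, (prior, profile) in KNOWLEDGE_BASE.items():
        # Fraction of this diagnosis's symptom profile that the user reports,
        # weighted by how common the diagnosis is to begin with.
        overlap = len(symptoms & profile) / len(profile)
        scores[diagnosis] = prior * overlap
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

ranked = rank_diagnoses({"cough", "fever", "fatigue"})
print(ranked[0][0])  # prints "influenza" with the toy numbers above
```

A production engine would of course use validated clinical likelihoods, age and risk-factor adjustments, and a much larger diagnosis set, but the core loop of scoring candidate conditions against reported symptoms looks broadly like this.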

In the past, doctors have had to wait years between editions of their medical textbooks as science continued to progress further every day.

The GLM is a platform like Apple's iOS or Microsoft Windows that's continuously being updated with the most cutting-edge medical data.

And because the database is constantly being updated and the machine learning technology helps refine the algorithms...

Treatment.com's technology is getting smarter and more accurate all the time.

It's all because some of the smartest physicians and data scientists in the world have been working on this project for years, with the result set to be unveiled this fall.

4 - World-Class Team With a Billion Dollar Track Record

Over the last 5 years, Treatment.com's (CSE: TRUE; OTC: TREIF) global team of top doctors has been working tirelessly to build technology that thinks like a doctor.

They've come together with a team of top AI scientists, mathematicians, and PhDs in statistics to bring the technology to life.

And one look at their leadership team is enough to see the kind of billion-dollar potential that could lie ahead for Treatment.com.

Their CMO, Dr. Kevin Peterson, is an internationally-known researcher and tenured professor at a top medical school.

He's been in the business of training others in the medical field at the highest levels for over 35 years.

With that experience, there's almost nobody more qualified to train their AI technology how doctors truly think.

Pair that experience with the business credentials of CEO John Fraser, and the business potential becomes even more exciting.

Fraser led another healthcare tech company, Ability Network, until it was acquired by Inovalon in 2018 for a massive $1.2 billion.

Now he's hoping to follow that same blueprint with some of the world's top doctors where the potential user base could be well over a billion people.

Even if they manage to get a small sliver of that to download the CARA app, the potential profits for Treatment.com could be mind-boggling.

5 - Timing is Everything

We've already seen how Big Tech companies are piling into the healthcare industry at breakneck speed...

Small companies are raking in millions or more when acquired by larger healthcare companies...

And Treatment.com's leadership has already proven their chops with a billion-dollar track record.

Now their CARA app could have patients, doctors, and insurance companies chomping at the bit to get their hands on it, as over 1 billion people are embracing the new face of healthcare.

And it's all set to begin this fall as Treatment.com (CSE: TRUE; OTC: TREIF) rolls out the app across North America.

They just teamed up with MentorMate, a US-based leader in mobile apps, to help scale the app and ensure it's the most up-to-date on the market.

MentorMate has over 700 developers and 1,400 completed projects under its belt, and now it's set to help make CARA's launch a massive success.

But the app could spread far beyond just North America over time.

Treatment.com has staff in Europe, Africa, and Singapore as well.

If all goes well, they could play a major role in disrupting the massive $12 trillion healthcare industry worldwide.

And as it's trading for under $3, it's an exciting time to watch Treatment.com as they're just weeks away from launching this new app across North America.

Other companies looking to capitalize on the healthcare boom:

Nuance Communications Inc. (NASDAQ:NUAN) is a leading provider of voice and language solutions for businesses and consumers around the world. The company's expertise in understanding natural language enables its customers to create more engaging customer experiences, provide support more efficiently, develop new products faster, and unlock insights from data to make critical decisions. Nuance's technologies are used by many Fortune 500 companies including BMW Group, Coca-Cola Enterprises Inc., McDonald's Corporation, Samsung Electronics Co., Ltd., Sony Computer Entertainment America LLC, and Virgin Atlantic Airways Limited, as well as hundreds of other organizations in various industries such as automotive retailing, banking and financial services, and healthcare providers and manufacturers.

Nuance Communications Inc.'s healthcare solutions are award-winning. From ambient clinical intelligence and patient engagement solutions to its documentation capturing solutions, Nuance's Dragon Ambient eXperience helps healthcare providers deliver better patient services with AI-powered, omnichannel technology.

In a blog post, Nuance Communications highlights that "Nearly 70% of healthcare executives plan to invest in AI-powered technologies as they seek new ways to solve healthcare's toughest challenges. These planned investments clearly illustrate how a growing number of organizations are embracing AI-enabled technologies as a strategic asset."

iRhythm Technologies Inc. (NASDAQ:IRTC) is a company that has developed and patented an innovative medical device to detect arrhythmia (abnormal heart rhythms). The iRhythm team has extensive experience in designing, building, and testing electronic devices for medical purposes. They have validated the effectiveness of their technology through multiple clinical studies conducted by independent investigators at leading hospitals worldwide.

The iRhythm device is designed to be used as a diagnostic tool during an arrhythmia event and can also be worn continuously as a preventative measure against sudden cardiac death. And when it comes to cardiac-related illnesses, everyone knows that every second counts!

In a recent release, iRhythm Technologies published results of its mHealth Screening to Prevent Strokes (mSToPS) study, finding that Zio by iRhythm led to a 10x increase in the detection of AF (atrial fibrillation) versus patients receiving standard clinical care. Steven Steinhubl, MD, Director of Digital Medicine at Scripps Research Translational Institute and principal investigator of the study, noted the importance of these findings, stating, "A significant portion of those with AF have no symptoms and aren't aware that they have it," adding, "Long-term, continuous monitoring is helping in the shift to more preventative and proactive treatment and care."

International Business Machines Corporation (NYSE:IBM), better known as IBM, is a multinational technology company headquartered in Armonk, New York. It was founded in 1911 by Charles Ranlett Flint and later built into a global power by Thomas J. Watson Sr., one of the leading businessmen of the 20th century. The company specializes in developing hardware and software products for data processing, storage, networking systems and other computer-related technologies.

International Business Machines' mainframe computers are widely used around the world, with many large corporations relying on them to process transactions or crunch numbers. These machines have been instrumental in running some of the largest companies of the last half-century, including General Electric Company (GE), Walmart Inc., Ford Motor Company, and ExxonMobil Corporation (XOM), among others.

International Business Machines is another tech veteran at the forefront of healthcare innovation. IBM's Watson Health is a platform that combines artificial intelligence, blockchain technology, data, and analytics to support clients' growing digital needs. Watson Health provides solutions to clinicians, governments, and even researchers.

Intel Corporation (NASDAQ:INTC) is an American multinational semiconductor chip maker that develops computer chips for data processing, communications, and storage. Intel's headquarters are in Santa Clara, California, with offices around the world. Founded by Gordon Moore and Robert Noyce, Intel was incorporated on July 18, 1968. The company has made several contributions to technology, including the first commercial microprocessor (1971) and ongoing research into quantum computing.

Intel Corporation is no stranger to new tech, and it's certainly doing its part in the healthcare sector as well, in particular in the advancement of artificial intelligence in fields such as medical imaging, analytics, and lab and life sciences.

In 2010, Intel Corporation embarked on a joint venture with GE to create a new company focused on telehealth and independent living, aimed at tackling the burden of chronic disease and age-related conditions. Then-Intel President and CEO Paul Otellini explained, "New models of care delivery are required to address some of the largest issues facing society today, including our aging population, increasing healthcare costs and a large number of people living with chronic conditions," adding, "We must rethink models of care that go beyond hospital and clinic visits, to home and community-based care models that allow for prevention, early detection, behavior change, and social support. The creation of this new company is aimed at accelerating just that."

BlackBerry Limited (NYSE:BB, TSX:BB) is one of Canada's most exciting tech plays. While it has pivoted away from its iconic cell phones of yesteryear, it is still very much involved in pushing tech, and by extension all of mankind, further. It's even building a global digitized healthcare database leveraging blockchain technology. This could be a game-changer for how health data is managed and distributed. But that's just one facet of its big-picture push. From its high-profile partnerships with the likes of Amazon and more to its key posturing in the Internet of Things explosion, BlackBerry is tackling the industry from all fronts and will be an important player for years to come.

BlackBerry also launched a new research and development arm called BlackBerry Advanced Technology Labs. "Today's cybersecurity industry is rapidly advancing and BlackBerry Labs will operate as its own business unit solely focused on innovating and developing the technologies of tomorrow that will be necessary for our sustained competitive success, from A to Z; Artificial Intelligence to Zero-Trust environments," Charles Eagan, BlackBerry CTO, explained.

AEterna Zentaris Inc. (TSX:AEZS) is a major biopharmaceutical up-and-comer. The company has seen steady growth and an array of new developments over recent years. With a focus on oncology, endocrinology, and women's health solutions, AEterna has created a variety of new products, including Macrilen, the first and only FDA-approved oral test for the diagnosis of Adult Growth Hormone Deficiency.

Recently, AEterna received European approval to market Macrilen, which has pushed its value even higher. Dr. Christian Strasburger, the Head of Clinical Endocrinology at Charité Universitätsmedizin Berlin and the principal investigator for macimorelin, explained, "Clinical studies have demonstrated that macimorelin is safer and much simpler to administer than the current methods of testing involving insulin-induced hypoglycemia, and is well-tolerated by patients and reliable in diagnosing the condition."

Aptose Biosciences Inc. (TSX:APS) is a biotech company specializing in personalized therapies to address Canada's unmet oncology needs. The company uses genetic and epigenetic profiles to gain insights into certain cancers and patient populations in order to develop new treatments within the space.

Aptose has an exclusive partnership with Ohm Oncology to develop, manufacture, and commercialize APL-581 and related molecules to treat hematologic malignancies.

The Hexo Corporation (TSX:HEXO) made major waves with its partnership with Molson Coors to develop cannabis beverages. In Hexo's fourth-quarter press release, the company shared some optimistic news regarding Truss' progress, with Sebastien St-Louis, Hexo CEO and co-founder, explaining, "We are commanding significant market share in Quebec and this year we made major strides by launching Truss cannabis-infused beverages in Canada in addition to our initial foray into the U.S. with Molson Coors, a world-class partner."

The world is currently in the midst of a mental health crisis. Everything that could possibly go wrong, has. And to make matters worse, millions, if not tens of millions of people are stuck in isolation. It's never been more important to support the field of mental health. And the FDA seems to agree. Not only have they fast-tracked psilocybin, but they've also approved other exciting new approaches to tackling mental health issues.

Toronto-based Field Trip Health (CSE:FTRP) is taking a three-pronged approach in its work in the transformative psychedelic medicine sector. Not only is the company involved in drug development, but it also handles manufacturing and runs a number of treatment clinics.

Field Trip has hit the ground running. With clinics currently operating in Toronto, Los Angeles, and New York, they have plans to ramp up to 75 clinics providing psychotherapy along with psychedelic treatments. As one of the frontrunners in this exciting new industry, investors are keeping a close eye on Field Trip.

Visit link:

Why A.I. Is About To Trigger The Next Great Medical Breakthrough - Yahoo Finance


Terrifyingly, Facebook wants its AI to be your eyes and ears – The Next Web

Posted: at 10:29 pm

Facebook has announced a research project that aims to push the frontier of first-person perception, and in the process help you remember where you left your keys.

The Ego4D project provides a huge collection of first-person video and related data, plus a set of challenges for researchers to teach computers to understand the data and gather useful information from it.

In September, the social media giant launched a line of smart glasses called Ray-Ban Stories, which carry a digital camera and other features. Much like the Google Glass project, which met mixed reviews in 2013, this one has prompted complaints of privacy invasion.

The Ego4D project aims to develop software that will make smart glasses far more useful, but may in the process enable far greater breaches of privacy.

Facebook describes the heart of the project as "a massive-scale, egocentric dataset and benchmark suite collected across 74 worldwide locations and nine countries, with over 3,025 hours of daily-life activity video."

The "Ego" in Ego4D means egocentric (or first-person video), while "4D" stands for the three dimensions of space plus one more: time. In essence, Ego4D seeks to combine photos, video, geographical information and other data to build a model of the user's world.

There are two components: a large dataset of first-person photos and videos, and a benchmark suite consisting of five challenging tasks that can be used to compare different AI models or algorithms with each other. These benchmarks involve analyzing first-person videos to remember past events, create diary entries, understand interactions with objects and people, and forecast future events.

The dataset includes more than 3,000 hours of first-person video from 855 participants going about everyday tasks, captured with a variety of devices including GoPro cameras and augmented reality (AR) glasses. The videos cover activities at home, in the workplace, and hundreds of social settings.

Although this is not the first such video dataset to be introduced to the research community, it is 20 times larger than publicly available datasets. It includes video, audio, 3D mesh scans of the environment, eye gaze, stereo, and synchronized multi-camera views of the same event.

Most of the recorded footage is unscripted, or "in the wild." The data is also quite diverse: it was collected from 74 locations across nine countries, and those capturing the data have various backgrounds, ages and genders.
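To give a feel for how researchers might work with a dataset like this, here is a minimal sketch of aggregating a video annotation manifest. The field names used below ("videos", "scenario", "duration_sec") are assumptions for illustration only, not the actual Ego4D schema:

```python
import json

# Sketch of summarizing an Ego4D-style annotation manifest. The JSON layout
# here is hypothetical; consult the real dataset documentation for its schema.
manifest_json = """
{
  "videos": [
    {"video_uid": "v001", "scenario": "cooking",   "duration_sec": 610.0},
    {"video_uid": "v002", "scenario": "carpentry", "duration_sec": 1250.5},
    {"video_uid": "v003", "scenario": "cooking",   "duration_sec": 95.0}
  ]
}
"""

def total_hours_by_scenario(manifest):
    """Aggregate recorded hours per scenario across all clips in the manifest."""
    totals = {}
    for video in manifest["videos"]:
        totals[video["scenario"]] = totals.get(video["scenario"], 0.0) + video["duration_sec"]
    return {scenario: round(sec / 3600, 3) for scenario, sec in totals.items()}

manifest = json.loads(manifest_json)
print(total_hours_by_scenario(manifest))
```

Summaries like this (hours per scenario, per location, per device) are typically the first step before training or benchmarking models on such a corpus.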

Commonly, computer vision models are trained and tested on annotated images and videos for a specific task. Facebook argues that current AI datasets and models represent a third-person or a spectator view, resulting in limited visual perception. Understanding first-person video will help design robots that better engage with their surroundings.

[Image: Future robotic agents will benefit from a better understanding of their environment. Credit: Wikimedia]

Furthermore, Facebook argues egocentric vision can potentially transform how we use virtual and augmented reality devices such as glasses and headsets. If we can develop AI models that understand the world from a first-person viewpoint, just like humans do, VR and AR devices may become as valuable as our smartphones.

Facebook has also developed five benchmark challenges as part of the Ego4D project, aimed at building the kind of understanding of first-person video needed for useful AI assistants. The benchmarks cover episodic memory (querying past first-person video, such as "where did I leave my keys?"), hand-object interaction, audio-visual diarization (who said what, and when), social interaction, and forecasting of future events.

Obviously, there are significant privacy concerns. If this technology is paired with smart glasses constantly recording and analyzing the environment, the result could be constant tracking and logging (via facial recognition) of people moving around in public.

While the above may sound dramatic, similar technology has already been trialed in China, and the potential dangers have been explored by journalists.

Facebook says it will maintain high ethical and privacy standards for the data gathered for the project, including consent of participants, independent reviews, and de-identifying data where possible.

As such, Facebook says the data was captured in a controlled environment with informed consent, and in public spaces faces and other PII [personally identifying information] are blurred.
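The blurring step in that de-identification pipeline is conceptually simple. The sketch below redacts given regions of a frame using NumPy; face detection, the harder half of the problem, is assumed to have already produced the bounding boxes:

```python
import numpy as np

# Minimal sketch of region redaction for de-identifying footage. Face
# detection is out of scope here; we assume (x, y, w, h) boxes are given.
def blur_regions(frame, boxes):
    """Replace each (x, y, w, h) box in a 2D frame with its mean value."""
    out = frame.astype(float).copy()
    for x, y, w, h in boxes:
        region = out[y:y + h, x:x + w]
        # Crude redaction: flatten all detail in the box to a single value.
        out[y:y + h, x:x + w] = region.mean()
    return out

frame = np.arange(100, dtype=float).reshape(10, 10)  # stand-in video frame
redacted = blur_regions(frame, [(2, 2, 4, 4)])
print(redacted[2:6, 2:6].std())  # 0.0 -- no pixel variation left in the box
```

Real systems use stronger methods (Gaussian blur, pixelation, or full masking) applied per frame of video, but the principle of destroying recoverable detail inside detected regions is the same.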

But despite these reassurances (and noting this is only a trial), there are concerns over the future of smart-glasses technology in the hands of a social media giant whose intentions have not always been aligned with its users' interests.

The ImageNet dataset, a huge collection of tagged images, has helped computers learn to analyze and describe images over the past decade or more. Will Ego4D do the same for first-person video?

We may get an idea next year. Facebook has invited the research community to participate in the Ego4D competition in June 2022, and pit their algorithms against the benchmark challenges to see if we can find those keys at last.

Article by Jumana Abu-Khalaf, Research Fellow in Computing and Security, Edith Cowan University and Paul Haskell-Dowland, Associate Dean (Computing and Security), Edith Cowan University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

See the original post here:

Terrifyingly, Facebook wants its AI to be your eyes and ears - The Next Web


Coverage of FDA's AI/ML Medical Devices Workshop – Part 3: A Summary of the Panel Discussions – JD Supra

Posted: at 10:29 pm

In the weeks leading up to FDA's October 14, 2021 Transparency of AI/ML-Enabled Medical Devices Workshop (the "Workshop"), we took a brief look at the history of FDA's regulation of medical device software and the agency's more recent efforts in regulating digital health. In this post, we will provide an overview of the topics discussed at the Workshop and our impressions of the agency's likely next steps.

Stakeholders participating in the Workshop discussed what form(s) product labeling might take, and whether something analogous to a nutrition label would be helpful (in other words, a short-form, uniform label formatted to present essential, easily understandable information about an AI/ML-enabled device's accuracy, fairness, generalization, transparency, and robustness, among other things). Patients discussed the need for a brief overview with critical information, which might look more like a nutrition label, but they also mentioned that more information should be made readily available to patients and their providers, similar to the lengthier package insert for prescription drugs. Stakeholders also discussed the idea that information about AI/ML-enabled devices should be different and appropriately tailored for health care providers versus patients, and that the opportunity to access real-time data about the technology's accuracy and performance could significantly enhance transparency.

Topics of significant discussion amongst the stakeholders during the day-long event were concerns about data quality, bias, and health equity. Bias is a concern because AI/ML is so heavily data-driven and can result in inherent biases depending on the data set that is used to develop the technology. As a result, the stakeholders discussed the importance of data sets being representative of the intended patient population, and they agreed that health equity is a key goal to build into those data collection efforts as well. The group discussed the need for evaluations of sex-specific data, race and ethnicity data, and age, disabilities, and comorbidities in clinical testing and human factors analyses in order to improve consistency and transparency regarding safety, efficacy and usability for various groups.

Another topic of interest to Workshop participants was how to provide transparency when companies are using proprietary software, and whether insisting on open source coding so patients and health care providers can see the algorithms used would make a difference. In general, there seemed to be more interest in understanding how the algorithm was trained, how it works, and how accurate it is within specific patient populations, rather than having access to the actual algorithm. However, it seems clear that open source coding is something that FDA is considering with respect to transparency.

Stakeholders also discussed the need for design controls and the need for understanding users and incorporating human factors during the design and validation process for AI/ML-enabled medical devices. They debated the need for an intuitive user interface, which can benefit from early user involvement in the design process, as well as the potential for using predetermined change control plans. Such a predetermined change control plan could include the rationale for the update, a description of change in product claims, and a description of any changes to the software and its instructions for use. Finally, the stakeholders acknowledged that machine learning will be more challenging and likely require more of an ongoing validation and reporting process compared to artificial intelligence with locked algorithms.

Patients and patient advocates continue to voice concerns about the potential for clinical and personal data to be used against patients. With the vast amount of information available about individuals from various sources, there are legitimate concerns about the use and disclosure of de-identified data sets due to the potential for re-identification. There are also legitimate concerns about the potential harm to patients from false positives and incidental findings, for example, misdiagnoses and denial of insurance coverage. Some also question whether patients are given sufficient information about what laws protect their personal and health information and what precautions will be used to protect their privacy before they agree to use the technology. Finally, there seems to be some consensus that some level of patient consent should be required before an AI/ML-enabled device may collect a patient's data, and there is some question as to whether clicking "I accept" on a data privacy policy should be sufficient.

In closing, the recent Workshop highlighted an overarching theme: transparency means communicating appropriate information throughout the product lifecycle, at the right times, and taking into account different contextual factors, and FDA is seeking comments to elucidate how to accomplish such communication effectively. In particular, FDA seems interested in gathering stakeholder perspectives on how to provide patient-centered transparency, with the goal of communicating the risks and benefits of the technology, how to effectively and safely use it, and any information necessary for monitoring the device's performance. Further, FDA is interested in methods to address health equity issues and to communicate the right information at the right time. Specifically, the agency wants views on what information to include on product labels and whether the use of expanded media such as videos would be helpful. Additionally, FDA is seeking input on how to incorporate user participation in design, and how best to tailor its regulatory approach to AI/ML.

We encourage all interested parties, as FDA officials did at the close of the Workshop, to submit comments on the topic of AI/ML-enabled medical device transparency to the docket (FDA-2019-N-1185) by November 15, 2021.


Read the original:

Coverage of FDA's AI/ML Medical Devices Workshop - Part 3: A Summary of the Panel Discussions - JD Supra

