Astera Labs to Host Mayor of Burnaby at Grand Opening Of New Vancouver Design Center and Lab Dedicated to Purpose-Built Connectivity Solutions for…

--(BUSINESS WIRE)--Astera Labs Inc.:

WHEN:

Wednesday, September 21, 2022, from 9:30 a.m. to 11:30 a.m. PDT

WHERE:

Astera Labs Vancouver
4370 Dominion Street
Burnaby, BC V5G 4L7
Canada

WHO:

WHAT:

Astera Labs welcomes the Mayor of Burnaby and the Burnaby Board of Trade President and CEO to celebrate the grand opening of its new state-of-the-art design center and lab in the Greater Vancouver Area.

Astera Labs Vancouver will support the company's development of cutting-edge interconnect technologies for Artificial Intelligence and Machine Learning architectures in the Cloud. The rapidly growing semiconductor company chose the Vancouver area to tap into the region's rich technology talent base to drive product development, customer support, and marketing. The Vancouver location expands the company's operations in Canada, which already include the new Research and Development Design Center in Toronto, and adds to a global footprint anchored by headquarters in Santa Clara, California and offices around the globe.

Astera Labs is actively hiring across multiple engineering and marketing disciplines to support end-to-end product and application development and overall go-to-market operations. Open positions can be found at http://www.AsteraLabs.com/Careers/.

The ribbon cutting and photo opportunity with Burnaby Officials and Astera Labs Executives will be held outdoors. Below is an overview of the event agenda:

Event Schedule

Formal Remarks

9:30 a.m.-10:00 a.m. PDT

Ribbon Cutting / Photo Op / Media Q&A

10:00 a.m.-10:30 a.m. PDT

Indoor Reception

10:30 a.m.-11:30 a.m. PDT

For onsite assistance, contact Dave Nelson at (604) 418-9930.

About Astera Labs

Astera Labs Inc. is a leader in purpose-built data and memory connectivity solutions that remove performance bottlenecks throughout the data center. With locations worldwide, the company's silicon, software, and system-level connectivity solutions help realize the vision of Artificial Intelligence and Machine Learning in the Cloud through CXL, PCIe, and Ethernet technologies. For more information about Astera Labs, including open positions, visit http://www.AsteraLabs.com.

7 Machine Learning Portfolio Projects to Boost the Resume – KDnuggets

There is high demand for machine learning engineers, but the hiring process is tough to crack. Companies want to hire professionals with experience handling a variety of machine learning problems.

For a newbie or fresh graduate, there are only a few ways to showcase skills and experience: get an internship, work on open-source projects, volunteer on NGO projects, or build portfolio projects.

In this post, we will focus on machine learning portfolio projects that will boost your resume and help you during the recruitment process. Working solo on a project also makes you better at problem-solving.

The mRNA Degradation project is a complex regression problem. The challenge is to predict degradation rates that can help scientists design more stable vaccines in the future.

The project is two years old, but you will learn a lot about solving regression problems using complex 3D data manipulation and deep learning GRU models. Furthermore, you will be predicting five targets: reactivity, deg_Mg_pH10, deg_Mg_50C, deg_pH10, and deg_50C.
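For a sense of the modeling approach, here is a minimal sketch (my own illustration, not a competition-grade pipeline) of a bidirectional GRU regressor in Keras that predicts the five targets at each sequence position. The encoding dimensions, layer sizes, and dummy data are placeholder assumptions.

```python
# Minimal sketch: a bidirectional GRU that maps an encoded RNA sequence to five
# degradation-related targets per position. Shapes and hyperparameters are
# illustrative placeholders, not the actual competition preprocessing.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

SEQ_LEN, N_FEATURES, N_TARGETS = 107, 14, 5  # assumed encoding dimensions

def build_model():
    inputs = layers.Input(shape=(SEQ_LEN, N_FEATURES))
    x = layers.Bidirectional(layers.GRU(128, return_sequences=True))(inputs)
    x = layers.Bidirectional(layers.GRU(64, return_sequences=True))(x)
    # One value per target (reactivity, deg_Mg_pH10, deg_Mg_50C, deg_pH10, deg_50C)
    outputs = layers.Dense(N_TARGETS)(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mse")  # simple stand-in loss
    return model

# Random tensors just to show the expected input/output shapes.
X = np.random.rand(32, SEQ_LEN, N_FEATURES).astype("float32")
y = np.random.rand(32, SEQ_LEN, N_TARGETS).astype("float32")
model = build_model()
model.fit(X, y, epochs=1, batch_size=8)
```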

Automatic Image Captioning is a must-have project for your resume. You will learn about computer vision, pre-trained CNN models, and LSTMs for natural language processing.

In the end, you will build the application on Streamlit or Gradio to showcase your results. The image caption generator will generate a simple text describing the image.
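As a rough sketch of what such an app can look like, the snippet below wraps a publicly available pre-trained captioning checkpoint in a small Gradio interface. The `image-to-text` pipeline task and the `nlpconnect/vit-gpt2-image-captioning` checkpoint are assumptions on my part and need a reasonably recent version of `transformers`.

```python
# Minimal demo sketch: wrap a pre-trained image-captioning model in a Gradio app.
# The model name is one example checkpoint from the Hugging Face Hub.
import gradio as gr
from transformers import pipeline

captioner = pipeline("image-to-text", model="nlpconnect/vit-gpt2-image-captioning")

def caption(image):
    # The pipeline returns a list of dicts like [{"generated_text": "..."}].
    return captioner(image)[0]["generated_text"]

demo = gr.Interface(
    fn=caption,
    inputs=gr.Image(type="pil"),
    outputs="text",
    title="Image Caption Generator (demo)",
)

if __name__ == "__main__":
    demo.launch()
```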

You can find multiple similar projects online and even create your own deep learning architecture to predict captions in different languages.

The primary purpose of a portfolio project is to work on a unique problem. It can use the same model architecture but a different dataset. Working with various data types will improve your chances of getting hired.

Forecasting using Deep Learning is a popular project idea. You will learn many things about time-series data analysis, data handling, pre-processing, and neural networks for time-series problems.

Time-series forecasting is not simple. You need to understand seasonality, holiday effects, trends, and daily fluctuations. Most of the time you don't even need neural networks; simple linear regression can give you the best-performing model. But in the stock market, where the stakes are high, even a one percent difference can mean millions of dollars in profit for a company.
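To make the "try a simple baseline first" point concrete, here is a minimal sketch that fits ordinary linear regression on lagged values of a synthetic series. A real project would use proper walk-forward validation and domain features; everything below is illustrative.

```python
# Baseline sketch: a plain linear-regression forecaster on lagged values,
# trained on synthetic data so the example runs on its own.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(size=500)) + 0.05 * np.arange(500)  # noisy trend

def make_lagged(y, n_lags=7):
    """Build a supervised dataset where each row holds the previous n_lags values."""
    X = np.column_stack([y[i:len(y) - n_lags + i] for i in range(n_lags)])
    target = y[n_lags:]
    return X, target

X, y = make_lagged(series)
split = int(0.8 * len(X))                       # keep the last 20% as a holdout
model = LinearRegression().fit(X[:split], y[:split])
print("one-step-ahead R^2 on the holdout:", model.score(X[split:], y[split:]))
```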

Having a Reinforcement Learning project on your resume gives you an edge during the hiring process. The recruiter will assume that you are good at problem-solving and you are eager to expand your boundaries to learn about complex machine learning tasks.

In the Self-Driving car project, you will train the Proximal Policy Optimization (PPO) model in the OpenAI Gym environment (CarRacing-v0).

Before you start the project, you need to learn the fundamentals of Reinforcement Learning as it is quite different from other machine learning tasks. During the project, you will experiment with various types of models and methodologies to improve agent performance.
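A minimal training sketch follows. The article specifies PPO and the CarRacing-v0 environment; using Stable-Baselines3 with a CNN policy over pixel observations is my assumption, and the timestep budget shown is far too small to produce a competent driver.

```python
# Sketch only: PPO on CarRacing-v0 via Stable-Baselines3.
# Requires Box2D (pip install gym[box2d] stable-baselines3).
import gym
from stable_baselines3 import PPO

env = gym.make("CarRacing-v0")
model = PPO("CnnPolicy", env, verbose=1)   # pixel observations -> CNN policy
model.learn(total_timesteps=50_000)        # toy budget; expect millions in practice
model.save("ppo_carracing")

# Quick rollout with the trained policy (old gym step/reset API).
obs = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
```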

Conversational AI is a fun project. You will learn about Hugging Face Transformers, Facebook Blender Bot, handling conversational data, and creating chatbot interfaces (API or Web App).

Thanks to the huge library of datasets and pre-trained models available on Hugging Face, you can simply fine-tune a model on a new dataset. It could be Rick and Morty conversations, your favorite film character, or any celebrity you love.

Apart from that, you can improve the chatbot for your specific use case. In a medical application, for example, the chatbot needs domain knowledge and must understand the patient's sentiment.
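As a starting-point sketch, the snippet below loads a pre-trained BlenderBot checkpoint from the Hugging Face Hub and generates a reply; fine-tuning on your own conversational data would build on the same tokenizer/model pair. The checkpoint name and generation settings are illustrative choices, not something the article prescribes.

```python
# Minimal chat sketch with a pre-trained BlenderBot checkpoint.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "facebook/blenderbot-400M-distill"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

def reply(user_message: str) -> str:
    inputs = tokenizer(user_message, return_tensors="pt")
    # Simple default generation settings; tune for your use case.
    output_ids = model.generate(**inputs, max_new_tokens=60)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

print(reply("I have been feeling tired all week. Any advice?"))
```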

Automatic Speech Recognition is my favorite project ever. I learned everything about transformers, handling audio data, and improving model performance. It took me two months to understand the fundamentals and another two to create the architecture that works on top of the Wav2Vec2 model.

You can improve the model performance by boosting Wav2Vec2 with n-grams and text pre-processing. I have even pre-processed the audio data to improve the sound quality.

The fun part is that you can fine-tune the Wav2Vec2 model in any language.
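For reference, a bare-bones transcription loop with a pre-trained Wav2Vec2 CTC model looks roughly like this; n-gram boosting (for example via pyctcdecode) and audio clean-up would sit on top of this baseline. The checkpoint and file path are placeholders.

```python
# Baseline transcription sketch with a pre-trained Wav2Vec2 CTC model.
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_name = "facebook/wav2vec2-base-960h"
processor = Wav2Vec2Processor.from_pretrained(model_name)
model = Wav2Vec2ForCTC.from_pretrained(model_name)

# Load a mono audio file and resample to the 16 kHz rate the model expects.
speech, _ = librosa.load("sample.wav", sr=16_000)   # "sample.wav" is a placeholder path

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)        # greedy CTC decoding
print(processor.batch_decode(predicted_ids)[0])
```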

End-to-end machine learning project experience is a must. Without it, your chance of getting hired is pretty slim.

You will learn:

The main purpose of this project is not to build the best model or learn a new deep learning architecture. The goal is to become familiar with industry standards and techniques for building, deploying, and monitoring machine learning applications. You will learn a lot about development operations and how to create a fully automated system.
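As one small, hedged example of the deployment side, here is a sketch that serves a previously trained scikit-learn model behind a REST endpoint with FastAPI. The framework choice, file path, and request schema are my assumptions, and monitoring and CI/CD are deliberately left out.

```python
# Serving sketch: expose a trained scikit-learn model behind a REST endpoint.
# FastAPI is one common choice; the article does not prescribe a specific stack.
from typing import List

import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")   # placeholder path to a previously trained model

class Features(BaseModel):
    values: List[float]               # flat feature vector for a single example

@app.post("/predict")
def predict(features: Features):
    prediction = model.predict([features.values])[0]
    return {"prediction": float(prediction)}

# Run locally with: uvicorn app:app --reload
```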

After working on a few projects, I highly recommend creating a profile on GitHub or any code-sharing site where you can share your project findings and documentation.

The principal purpose of working on a project is to improve your odds of getting hired. Showcasing the projects and presenting yourself in front of a potential recruiter is a skill.

So, after working on a project, start promoting it on social media, create a fun web app using Gradio or Streamlit, and write an engaging blog post. Don't worry about what people are going to say. Just keep working on projects and keep sharing, and I am sure that in no time multiple recruiters will approach you about a job.

Abid Ali Awan (@1abidaliawan) is a certified data scientist professional who loves building machine learning models. Currently, he is focusing on content creation and writing technical blogs on machine learning and data science technologies. Abid holds a Master's degree in Technology Management and a bachelor's degree in Telecommunication Engineering. His vision is to build an AI product using a graph neural network for students struggling with mental illness.

What are the best courses to learn machine learning? – Rebellion Research

Thanks to artificial intelligence (AI) and machine learning (ML), software programs can improve their ability to predict outcomes without being explicitly programmed. Machine learning algorithms use historical data as input to predict new output values.

Online machine learning courses are a modern development that has benefited many workplace and company processes, as well as students' daily lives. In this area of artificial intelligence (AI), statistical techniques are used to build smart computer systems that can learn from readily available databases.

Artificial intelligence (AI) and machine learning (ML) are fields of computer science that concentrate on analyzing and interpreting patterns and structures in data to allow understanding, reasoning, and decision-making with little human involvement. In the world of technology, many students also use academic support services to help with online classes.

Different colleges and universities are now introducing machine learning courses, both online and on campus, so students can pick up these new skills and upgrade their learning. In this article, you can go through courses from several educational institutions.

The Stanford University course is the most popular online machine learning course. It's a great introduction to the field that anyone can take, whether you have no background in machine learning or are just looking for a refresher.

The course has been taught by Andrew Ng since 2012 and has over 200,000 students enrolled. There are also many other courses on Coursera that focus on more specialized topics such as computer vision and natural language processing (NLP).

You may enroll in fundamental machine learning classes online at coursera.org, where you can build and train supervised machine learning models for prediction and binary classification problems, such as logistic regression and linear regression.
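For illustration, a toy version of the kind of supervised model those classes cover might look like the following scikit-learn snippet; the dataset and settings are arbitrary choices for the sketch, not course material.

```python
# Toy binary-classification sketch: logistic regression on a built-in dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=5000)   # higher max_iter so the solver converges
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```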

Machine Learning Foundations is a free online course taught by the same people who led the machine learning course at Stanford University. It's designed for students with no prior knowledge of machine learning and covers all the basics you need to understand what makes machine learning work.

The course starts with an introduction to how computers learn from data, then moves on to algorithms like supervised classification and unsupervised clustering, which are used in many real-world applications. You will also learn about topic modeling, deep neural networks (DNNs), feature-selection methods like random forest classifiers, dimensionality-reduction techniques such as principal component analysis (PCA), feature extraction using linear regression models, and more.
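As a small illustration of how a couple of those topics fit together in practice, here is a sketch that chains PCA dimensionality reduction into a random forest classifier; it is not taken from the course itself, and the dataset and parameters are arbitrary.

```python
# Sketch: PCA for dimensionality reduction feeding a random-forest classifier,
# chained in a scikit-learn Pipeline and evaluated with cross-validation.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

X, y = load_digits(return_X_y=True)
pipeline = make_pipeline(
    PCA(n_components=30),
    RandomForestClassifier(n_estimators=200, random_state=0),
)
scores = cross_val_score(pipeline, X, y, cv=5)
print("5-fold accuracy:", scores.mean())
```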

All of this should give you enough background on which topics would be useful in your career as a data scientist or analyst. There is still plenty of machine learning left out, which might seem like an oversight given that those techniques are often more advanced than other approaches. But there is still hope: we live in an era in which big data has begun generating mountains upon mountains of material worth analyzing, so why not use it?

This course is designed for students with machine learning backgrounds who want to learn the fundamentals of building successful systems. The course focuses on the practical aspects of implementing machine learning applications with Python, R, and Hadoop.

The course includes lectures and exercises to help you understand how machine learning works. Alongside this material, you will also get access to online homework assignments where you can practice what was covered in the lectures.

EIT Digital Master School is a one-year online program that combines the best of both worlds, offering the flexibility to study whenever you have time and an intuitive interface that makes it easy to understand what's going on. This course is also recommended for business professionals who want to learn machine learning but have little experience with it. It lets you take machine learning courses online in an efficient way.

It is not the best option if you are looking for a career as a data scientist, since this course focuses solely on machine learning techniques and not on building products or services with these methods (though there are plenty of opportunities outside of academia). This course is not recommended if your goal is to get into industry right after graduation.

Fast.ai is a popular online platform offering students free and affordable courses. The courses are taught by Jeremy Howard, who has been teaching machine learning for years. He knows exactly what it takes to master this field, so if you are looking for an instructor who can help you build a strong foundation in the basics of linear algebra and calculus, this might be the best choice for your needs.

In addition to being taught by one of the best instructors in the field (and therefore offering access to some of the best resources), fast.ai has many other benefits, starting with being free. There are no hidden fees or charges, unlike many other online courses. If anything goes wrong while using their services, they will fix it without question, which means less stress when trying something new. And finally, the community around them is amazing, because they are so friendly toward others who want help getting better at programming languages like Python.

If you are interested in online machine learning courses and don't have a computer science background, it can seem overwhelming; still, there are plenty of courses that will help you get started. Machine learning is a very broad field, and it can be difficult to know where to start. Many courses are available for students who want to learn about machine learning but aren't sure where their path should begin.

Some courses are better suited for beginners than others; if you're just starting with this subject, then you should consider taking one of the following:

If you have some experience with programming, especially algorithms, but not necessarily computer science or maths, then another option might be better suited for you:

Takeaway #1: While taking machine learning courses online, students should not expect to be able to create a functioning model after the first few hours of study. You will probably spend much more time getting to that point than you ever thought possible. These courses take time and practice to master. Machine learning is learned step by step, so if you are taking these classes, be patient and keep practicing.

Takeaway #2: There is no substitute for experience, and lots of it! If you have never worked with any machine learning system before, don't expect to get up to speed quickly or without practice. The best course material mimics this reality: it gives you enough to understand the basics and then leaves it to you to keep practicing on your own until things become second nature, which means becoming familiar with the tools that are available and developing a critical eye for which data is reliable and which is not.

The Increased Use Of Machine Learning And Artificial Intelligence Is Expected To Fuel The Digital Transformation Market As Per The Business Research…

LONDON, Sept. 14, 2022 (GLOBE NEWSWIRE) -- According to The Business Research Company's research report on the digital transformation market, the increasing adoption of machine learning and artificial intelligence is expected to drive the growth of the digital transformation market going forward. Digital transformation provides traditional businesses with solutions like cloud computing, big data and analytics, data management, and other advanced features such as artificial intelligence and machine learning, which help optimize business operations, reducing operational effort and increasing efficiency. As a result, usage has increased in sectors such as healthcare, banking, transportation, and manufacturing, raising demand in the digital transformation market.

For instance, according to the report published by Cloudmantra, an India-based technology services company, the usage of machine learning in the Indian manufacturing industry has increased manufacturing capacity by up to 20% while reducing material usage by 4% in 2021. It also gives manufacturers the ability to control Overall Equipment Effectiveness (OEE) at the plant level, increasing OEE performance from 65% to 85%. Furthermore, according to the MIT Technology Review Insights report in 2022, approximately 60% of manufacturers are using artificial intelligence to improve daily operations, design products, and plan their future operations. Therefore, the rising adoption of machine learning and AI drives the digital transformation market.

The global digital transformation market size is expected to grow from $0.94 trillion in 2021 to $1.17 trillion in 2022, at a compound annual growth rate (CAGR) of 24.7%, and to reach $2.64 trillion in 2026 at a CAGR of 22.4%.
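Those figures can be sanity-checked against the standard compound-annual-growth-rate relationship, future value = present value x (1 + CAGR)^years:

```python
# Sanity check of the reported market-size figures using the CAGR formula.
print(0.94 * (1 + 0.247) ** 1)   # ~1.17 trillion for 2022, matching the report
print(1.17 * (1 + 0.224) ** 4)   # ~2.63 trillion for 2026, close to the stated $2.64T
```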

Technological advancement in digital solutions is a growing trend in the digital transformation market. Major companies operating in the market are focused on developing technologically advanced products to strengthen their position. For instance, in April 2020, Oracle Corporation, a US-based computer technology corporation and software solutions provider, launched a new cloud data service called GoldenGate, an Oracle Cloud Infrastructure offering that uses real-time data analytics. Real-time analysis applies logical and mathematical operations to data very quickly, which helps businesses understand requirements and act on decisions instantly. GoldenGate provides clients with a highly automated, fully managed cloud service for database replication, real-time data analysis, and real-time data ingestion to the cloud, making daily business operations easier to run and analyze.

Major players in the digital transformation market are Microsoft Corporation, IBM Corporation, Oracle Corporation, Google Inc., Cognizant, Accenture PLC, Dell EMC, Siemens AG, Hewlett-Packard Company, Adobe Systems Inc., Capgemini, Cognex Corporation, Deloitte, Marlabs Inc., Equinix Inc., PricewaterhouseCoopers, Apple Inc., Broadcom, CA Technologies, KELLTON TECH, International Business Machines Corporation, Hakuna Matata Solutions, ScienceSoft Inc., SumatoSoft, Space-O Technologies, HCL Technologies, and Tibco Software Inc.

The global digital transformation market analysis is segmented by technology into cloud computing, big data and analytics, artificial intelligence (AI), internet of things (IoT), and blockchain; by deployment mode into cloud and on-premises; by organization size into large enterprises and small and medium-sized enterprises (SMEs); and by end-user into BFSI, healthcare, telecom and IT, automotive, education, retail and consumer goods, media and entertainment, manufacturing, government, and others.

North America was the largest region in the digital transformation market in 2021. Asia-Pacific is expected to be the fastest-growing region in the global digital transformation market during the forecast period. The regions covered in the global digital transformation industry outlook are Asia-Pacific, Western Europe, Eastern Europe, North America, South America, the Middle East, and Africa.

Digital Transformation Global Market Report 2022 - Market Size, Trends, And Global Forecast 2022-2026 is one of a series of new reports from The Business Research Company that provide digital transformation market overviews, analyze and forecast market size and growth for the whole market and for digital transformation market segments and geographies, and cover digital transformation market trends, drivers, restraints, and leading competitors' revenues, profiles, and market shares in over 1,000 industry reports, covering over 2,500 market segments and 60 geographies.

The report also gives an in-depth analysis of the impact of COVID-19 on the market. The reports draw on 150,000 datasets, extensive secondary research, and exclusive insights from interviews with industry leaders. A highly experienced and expert team of analysts and modelers provides market analysis and forecasts. The reports identify top countries and segments for opportunities and strategies based on market trends and leading competitors' approaches.

Not the market you are looking for? Check out some similar market intelligence reports:

Artificial Intelligence Global Market Report 2022 By Offering (Hardware, Software, Services), By Technology (Machine Learning, Natural Language Processing, Context-Aware Computing, Computer Vision, Others (Image Processing, Speech Recognition)), By End-User Industry (Healthcare, Automotive, Agriculture, Retail, Marketing, Telecommunication, Defense, Aerospace, Media & Entertainment) - Market Size, Trends, And Global Forecast 2022-2026

Cloud Orchestration Global Market Report 2022 By Service Type (Cloud Service Automation, Training, Consulting, And Integration, Support And Maintenance), By Deployment Mode (Private, Public, Hybrid), By Organization Size (Small And Medium Enterprises (SMEs), Large Enterprises), By End-User (Healthcare And Life Sciences, Transportation And Logistics, Government And Defense, IT And Telecom, Retail, Manufacturing, Other End-Users) - Market Size, Trends, And Global Forecast 2022-2026

Internet Of Things (IoT) Global Market Report 2022 By Platform (Device Management, Application Management, Network Management), By End Use Industry (BFSI, Retail, Government, Healthcare, Manufacturing, Transportation, IT & Telecom), By Application (Building And Home Automation, Smart Energy And Utilities, Smart Manufacturing, Connected Logistics, Smart Retail, Smart Mobility And Transportation) - Market Size, Trends, And Global Forecast 2022-2026

Interested to know more about The Business Research Company?

The Business Research Company is a market intelligence firm that excels in company, market, and consumer research. Located globally, it has specialist consultants in a wide range of industries including manufacturing, healthcare, financial services, chemicals, and technology.

The World's Most Comprehensive Database

The Business Research Company's flagship product, the Global Market Model, is a market intelligence platform covering various macroeconomic indicators and metrics across 60 geographies and 27 industries. The Global Market Model covers multi-layered datasets that help its users assess supply-demand gaps.

Explore and master machine learning and data science with this eight-course bundle – TechRepublic

With this eight-course certification training bundle, you'll get to master machine learning and data science concepts. Grab it for $35 today.

Even in cutting-edge fields like machine learning, technology is constantly evolving and innovating. That's why, if you want to future-proof your skills and make strides up the career ladder, it's important to commit to learning today's most important technologies. Right now, that means delving into machine learning and data science. You can take a deep dive into both in the Machine Learning & Data Science Certification Training Bundle.

This eight-course bundle is taught by Minerva Singh, a Cambridge University Ph.D. graduate who has extensive experience in data science. She has expertise in tools like R, QGIS, Python and more.

Across the courses, you'll focus primarily on Python, R, and TensorFlow. Starting out, you'll get a full introduction to Python data science, learn how to install TensorFlow and Keras, and begin covering the basics of syntax and TensorFlow's graphing environment. From there, you'll begin creating artificial neural networks and deep learning structures and explore statistical modeling in TensorFlow.
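For readers who want a taste of what that looks like in practice, here is a minimal TensorFlow/Keras sketch of a small neural network; the dataset and layer sizes are arbitrary choices for illustration, not the bundle's actual exercises.

```python
# Illustrative sketch: a small dense neural network on MNIST in TensorFlow/Keras.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0   # scale pixels to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3, validation_data=(x_test, y_test))
```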

As you progress, you'll get more focused training in Python data science, learn how to classify and cluster data in Python, and understand how to use statistics in machine learning. Finally, in a couple of deep dives into R programming, you'll learn commonly used techniques, visualization methods, and deep learning approaches, and how to apply them to real-life temporal data.

You'll also explore deep neural networks, convolutional neural networks, and recurrent neural networks. By the end of the courses, you'll have a comprehensive understanding of some of the most important tools in machine learning and data science.

Get caught up to the future. Grab the Machine Learning & Data Science Certification Training Bundle for just $35 today.

Prices and availability are subject to change.

Explainable machine learning analysis reveals sex and gender differences in the phenotypic and neurobiological markers of Cannabis Use Disorder |…

Machine Learning Week 4 – Updated Iowa Game by Game Projections, Season Record, and Championship Odds – Black Heart Gold Pants

Not familiar with BizarroMath? You're in luck; I've launched a web site for it where you can get an explanation of the numbers and browse the data.

Week 1

Week 2

Week 3

All lines courtesy of DraftKings Sportsbook as of 8:00am, Monday, September 19, 2022.

Iowa football continues to be the #1 supplier of high-quality material for the Sickos Committee.

This week, BizarroMath went 4-8 ATS and 5-7 O/U. Combined with the prior record of 11-8 and 6-13, respectively, the algorithm is 15-16 ATS and 11-20 O/U on the season after three full weeks of play. Not a great outing in the second straight strange week of Division I NCAA Football, but we're still learning about these teams.

Vegas Says: MI -46.5, O/U 57.5

BizarroMath Says: MI -64.10 (MI cover), 71.66 (over)

Actual Outcome: MI 59, UCONN 0 (ATS hit, O/U hit)

One Sentence Recap: Michigan ain't played nobody.

Vegas Says: OU -11.5, O/U 64.5

BizarroMath Says: OU -7.90 (NE cover), 60.03 (under)

Actual Outcome: OU 49, NE 14 (ATS miss, O/U hit)

One Sentence Recap: We should all be rooting for Mickey Joseph's no-nonsense, just-play-the-damn-game style, which is a welcome departure from Scott Frost's chesty preening, but Nebraska still seems mired in a deep hole of undisciplined play and softness at the point of attack.

Vegas Says: n/a

BizarroMath Says: n/a

Actual Outcome: SILL 31, NU 24

One Sentence Recap: I watched the Salukis play many a game at the UNI Dome in Cedar Falls over the years, and I was probably less surprised than many that they pulled off this upset.

Vegas Says: Pk, O/U 58.5

BizarroMath Says: PUR -2.22 (Purdue win), O/U 47.9 (under)

Actual Outcome: SYR 32, PUR 29 (ATS miss, O/U miss)

One Sentence Recap: Much like the Penn State game, this game was there for the taking and Purdue simply refused, and I want to reiterate that I've been skeptical since before the season began that the 2022 Edition of Purdue would be able to maintain the momentum from last year.

Vegas Says: IN -6.5, O/U 59.0

BizarroMath Says: WKY -12.90 (WKY cover), 65.99 (over)

Actual Outcome: IND 33, WKY 30 (ATS hit, O/U hit)

One Sentence Recap: BizMa's prediction of a WKY upset damn near came true, but Tom Allen's sweeping, must-win-now changes in the offseason seem to be paying dividends, as the Hoosiers are figuring some things out and finding ways to win.

Vegas Says: RUT -17.5, O/U 44

BizarroMath: RUT -23.74 (RUT cover), 44.62 (over)

Actual Outcome: RUT 16, TEM 14 (ATS miss, O/U miss)

One Sentence Recap: I'm pretty sure Rutgers is close to its pre-season O/U win total already, as the Scarlet Knights, like the other Eastern Red Team, keep finding ways to win.

Vegas Says: PSU -3, O/U 49

BizarroMath: PSU -2.71 (Auburn cover), 44.70 (under)

Actual Outcome: PSU 41, AUB 12 (ATS miss, O/U miss)

One Sentence Recap: It just means more.

Vegas Says: MN -27.5, O/U 46.5

BizarroMath: MN -23.95 (CO cover), 44.20 (under)

Actual Outcome: MN 49, CO 7 (ATS miss, O/U miss)

One Sentence Recap: Minnesota ain't played nobody.

Vegas Says: WI -37.5, O/U 46.5

BizarroMath: WI -38.71 (WI cover), 50.1 (over)

Actual Outcome: WI 66, NMSU 7 (ATS hit, O/U hit)

One Sentence Recap: There's nothing interesting about this game other than two observations: (1) this is the most points Wisconsin has scored in the Paul Chryst era; (2) Wisconsin has the same problem as Iowa in that Chryst has probably hit his ceiling and isn't going to elevate the program any further, but he wins too much to let him go.

Vegas Says: OSU -31.5, O/U 61

BizarroMath: OSU -28.36 (Toledo cover), 66.12 (over)

Actual Outcome: OSU 77, TOL 21 (ATS miss, O/U hit)

One Sentence Recap: OSU's opponent-adjusted yards surrendered this year is an absurd 2.28, which could be more a function of the small sample size we have for their opponents than anything, but this is why I blend data, folks.

Vegas Says: MSU -3, O/U 57.5

BizarroMath: MSU -8.44 (MSU cover), 50.28 (under)

Actual Outcome: WA 39, MSU 28 (ATS miss, O/U miss)

One Sentence Recap: I've told you my numbers don't like the Spartans, and Washington just showed us why.

Vegas Says: IA -23, O/U 40

BizarroMath: IA -2.48 (Nevada cover), 47.22 (over)

Actual Outcome: IA 27, NEV 0 (ATS miss, O/U miss)

One Sentence Recap: Weird how when you inject a bunch of scholarship players back into your line-up, and play a defense of dubious quality, you can kind of, sort of, move the ball a little bit, even with an historically incompetent offense.

Vegas Says: MD -3.5, O/U 69.5

BizarroMath: SMU -1.31 (SMU cover), 75.22 (over)

Actual Outcome: MD 34, SMU 27 (ATS hit, O/U miss)

One Sentence Recap: Maryland has scored 121 points through its first three games; I put the O/U on how many more games it takes before Iowa breaks that mark at 5.5.

Now that I have the http://www.BizarroMath.com web site up and running, you can take a look at Iowa's game-by-game projections and season projections yourself. I'm not going to post the images this week and will leave it to you to visit the site if you want to see the data. This is not a clickbait money scheme. There are no ads on that site, I wrote the HTML by hand because I'm old and that's how I roll, and I make $0 off you visiting that site.

If you prefer to have the data presented in-line here, let me know and I will do that next week. Please answer the poll in the original post to help me figure out how best to do this.

Also, a caveat: if you come back to these links in the future, they will be updated with the results of future games, which is also a reason to post the data here for posterity, I suppose. Anyway, I may change the web site in the future to provide week-by-week updates showing the net changes. If you're interested in that, please let me know.

On to the analysis.

We finally have two FBS games' worth of data on Iowa and can start jumping to conclusions. Iowa's raw PPG against D1 competition is 17.0, which is good for #108 in the country. Iowa's raw YPG is 243.50, which puts the Hawkeyes at #115. Iowa's raw YPP is 4.20, ranking the Black and Old Gold at #110. The team is very slowly crawling out of the Division 1 cellar, but didn't exactly light the world on fire Saturday in a wet, frequently interrupted outing against a Nevada team widely regarded as Not Very Good.

We don't have enough data for opponent adjustments for Iowa at this point (I require at least three adjustable games). Iowa's blended data is what is used for the projections, and you can review that on the BizarroMath.com web site. Suffice it to say that Iowa's outing against Nevada was similar in profile to what the team looked like last year. But the schedule is a bit tougher this year, and Iowa needed some good fortune last year to make the Big 10 Championship game. I know nobody wants to hear it, but if this offense can climb up out of the triple-digit rankings and get even to the 80th-90th range, that just might be enough to stay in the conference race.

But this season may simply boil down to schedule. Wisconsin's cross-over games are @Ohio State, @MSU, and Maryland at home. That's about as hard as it gets without playing either Michigan or Penn State. Minnesota's cross-over games are @Penn State, @Michigan State, and Rutgers. Iowa's are @Rutgers, Michigan, and @Ohio State. From most to least difficult, I'd say Iowa has the worst draw, then Wisconsin, then Minnesota. The Gophers also get Purdue and Iowa at home, and have Nebraska, Wisconsin, and Illinois on the road. The Badgers have Illinois and Purdue at home and go on the road to play Nebraska, Iowa, and (don't laugh) Northwestern. The Badgers are 1-6 at Northwestern this century. The schedule generally favors the Gophers, and with Iowa playing Michigan and Ohio State in October, we shouldn't be surprised if the Hawkeyes are out of the division race before November.

That said, Iowa's game-by-game odds are moving in the right direction. Iowa is a significant underdog vs. Michigan and Ohio State, as expected, and a slight dog to Wisconsin and (stop traffic) Illinois. Perhaps most alarming is that the Hawkeyes have only a 37.92% chance to beat Minnesota. But! Recall that I am not yet doing opponent adjustments to the 2022 data for Minnesota, so their gaudy numbers are being taken at face value, and they'll drop after the Gophers play Michigan State this weekend.

To give you an idea of how that works, consider Michigan, which has played enough adjustable games that I can run opponent adjustments. Their opposition has been so terrible that BizarroMath discounts the Wolverines' raw 55.33 PPG by a whopping 22.54 points. This means that this Wolverine team is expected to put up just 32.80 points against an average D1 defense, to say nothing of what they can do against a Top 5 defense, which Iowa just so happens to have (again, before opponent adjustments). Michigan's adjusted data is thus actually worse than last year's, when the offense was worth an opponent-adjusted 42.17 PPG.
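In code form, the adjustment described above amounts to something like the toy calculation below, using the quoted figures; the real BizarroMath blending is obviously more involved than a flat discount.

```python
# Toy illustration of the opponent adjustment, using the numbers quoted above.
raw_ppg = 55.33            # Michigan's unadjusted scoring average
schedule_discount = 22.54  # penalty for the quality of the opposition faced
adjusted_ppg = raw_ppg - schedule_discount
print(f"opponent-adjusted PPG: {adjusted_ppg:.2f}")   # ~32.79, i.e. the ~32.80 cited
```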

Minnesota's adjustments will come soon enough, and we'll see them return to deep below the Earth, where filthy rodents belong. But so, too, will Iowa's, and of Iowa's three adjustable opponents after this coming weekend - Rutgers, Nevada, and Iowa State - the Cyclones are by far the best team.

Iowa Season Projections

The Nevada win and the swing in the statistics toward something more similar to last year's putrid but still-better-than-this-crap offensive performance have brightened Iowa's season outlook somewhat. Iowa's most likely outcome is now 7-5 (27.13% chance), with 6-6 being more likely (25.89%) than 8-4 (17.52%). There is a 92.11% chance that Iowa doesn't reach 9-3, and a 78.42% chance that the Hawkeyes get bowl eligible this year.

The Gilded Rodents' flashy numbers have pulled them almost even with Wisconsin, as the Badgers and Gophers are both in the 35-40% range for a division championship. Purdue's continued struggles drop the Boilermakers to the four spot, elevating hapless Iowa to third place in the West, though the Hawkeyes' chances of actually winning the damn thing drop to 8.40%. Iowa's climb up the division ladder from 5th to 3rd is more a function of the poor play of the teams now ranked lower than anything Iowa is doing on the field.

I'm a bit puzzled by the conference race in the East, where Ohio State shot from last week's 21.53% to this week's 64.18% chance, but I think it's because BizMa now has the Buckeyes with a 77.74% chance of winning The Game, which is the main shift that accounts for this change. Why? Well, this week we have opponent adjustments for both teams and OSU has played a tougher schedule, so the Buckeyes' numbers are not being discounted nearly as much as Michigan's.

For example, on offense, OSU is putting up a raw 8.49 YPP, which BizMa actually adjusts up to 9.58. Michigan, by comparison, is putting up 7.97 YPP, but BizMa adjusts it down to 6.36 YPP based on the competition. As we move into the conference slate and the quality of each team's opposition evens out, we'll probably see those numbers flatten out a bit.

I love week 4. Because the number of games I have to track is cut in half.

Vegas Says: n/a

BizarroMath: n/a

One Sentence Prediction: Your Fighting Illini are going to be 3-1 going into conference play, and they have been competitive, if a bit raggedy.

Vegas Says: MI -17, O/U 62.5

BizarroMath: MI -3.81, O/U 58.01 (MD cover, under)

One Sentence Prediction: BizMa sees this game as being much closer than Vegas does, and I think the difference might be a function of where we are in the season, as I just don't see Maryland's defense holding Michigan down, and I don't buy that under for even a second, folks.

Vegas Says: PSU -26, O/U 60.5

BizarroMath: PSU -30.47, O/U 58.97 (PSU cover, under)

One Sentence Prediction: I don't know a thing about Central Michigan this year, but a final along the lines of 45-13 sounds about right.

Vegas Says: MN -2, O/U 51.0

BizarroMath: MN -8.75, O/U 45.49 (MN cover, under)

One Sentence Prediction: We'll soon know if the Gilded Rodents are fool's gold, but not this week, as I think Minnesota is going to put up some points here in something like a 42-23 affair.

Vegas Says: CIN -15.5, O/U 54.0

BizarroMath: CIN -25.04, O/U 53.90 (CIN cover, under)

One Sentence Prediction: The Hoosiers either crash hard back down to Terra Firma in an embarrassing road rout, or this winds up being an unexpectedly knotty game.

Vegas Says: IA -7.5, O/U 35.5

BizarroMath: IA -9.85, O/U 32.09 (IA cover, under)

One Sentence Prediction: In Assy Football, the MVP is from one of two separate, yet equally important, groups: the punt team, which establishes poor field position for the opposition; and the punt return team, who try to field the ball outside of the 15 yard line without turning it over; this is their magnum opus.

Vegas Says: OSU -17.5, O/U 56.5

Apple won 59 Patents today covering the Gesture Recognition ‘AssistiveTouch’ Feature for Apple Watch & more – Patently Apple

Today the U.S. Patent and Trademark Office officially published a series of 59 newly granted patents for Apple Inc. In this particular report we briefly cover two Apple Watch patents. The first covers machine learning in the context of the Apple Watch AssistiveTouch feature that recognizes hand gestures. The second patent covers strengthening the Apple Watch cover glass. And as always, we wrap up this week's granted patent report with our traditional listing of the remaining granted patents that were issued to Apple today.

Machine-learning based Gesture Recognition using Multiple Sensors on Apple Watch

Earlier this month, Patently Apple posted a report titled "Apple's VP and Managing Director of Greater China delivered a Keynote at the Shanghai World Artificial Intelligence Conference." In that report we noted that Apple's vice president and managing director of Greater China, Ge Yue, pointed out Apple's use of machine learning in the Apple Watch.

Yue stated that "Assistive Touch on the Apple Watch allows users with limited mobility to control the Apple Watch through gestures. Instead of tapping the display, the feature combines on-device machine learning with data from the Apple Watch's built-in sensors to help detect subtle differences in muscle movement and tendon activity. By including a gyroscope, accelerometer, and optical heart rate sensor, users can control the Apple Watch with hand movements such as pinching or making a fist."
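As a loose illustration of the general idea (and emphatically not Apple's actual implementation), a gesture classifier over windows of wrist-sensor data could be prototyped along these lines; the channels, window length, labels, and random data are all invented for the sketch.

```python
# Highly simplified sketch: classify short windows of wrist-sensor data
# (accelerometer, gyroscope, optical heart-rate signal) into hand gestures.
# Features and labels are synthetic and purely illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
N_WINDOWS, WINDOW_LEN, N_CHANNELS = 600, 50, 7   # 3 accel + 3 gyro + 1 PPG channel
windows = rng.normal(size=(N_WINDOWS, WINDOW_LEN, N_CHANNELS))
labels = rng.integers(0, 3, size=N_WINDOWS)      # 0 = none, 1 = pinch, 2 = clench

# Per-channel summary statistics as crude hand-crafted features.
features = np.concatenate([windows.mean(axis=1), windows.std(axis=1)], axis=1)

X_train, X_test, y_train, y_test = train_test_split(features, labels, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("accuracy on random data (roughly chance):", clf.score(X_test, y_test))
```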

Today the U.S. Patent and Trademark Office officially granted Apple the patent behind AssistiveTouch, which covers gesture recognition, including machine-learning based gesture recognition, primarily on Apple Watch.

Apple's patent FIG. 2 below illustrates an example device that may implement a system for machine-learning based gesture recognition; FIG. 3 illustrates an example architecture, which may be implemented by an electronic device, for machine-learning based gesture recognition.

Apple's patent FIGS. 4A-4B above illustrate example diagrams of respective sensor outputs of an Apple Watch that may indicate a gesture.

Review granted patent 11,449,802 for more details on what is behind the Apple Watch AssistiveTouch feature.

Strengthened Cover for Apple Watch

Some conventional glass covers may warp during chemical strengthening. Warpage of the cover can make it more difficult to form a seal between the glass cover and another part of the electronic device housing. The tendency for warpage can increase with increasing amounts of chemical strengthening.

Aspects of the following disclosure relate to methods for chemically strengthening a cover for an electronic device. In embodiments, the cover defines a mounting surface that forms a seal with an enclosure component of the electronic device. In embodiments, the cover has a three-dimensional (3D) shape that includes a flange which defines the mounting surface.

The cover may be formed of an ion-exchangeable material, such as a glass. In some embodiments, the methods include at least two ion-exchange operations and an intermediate operation of locally removing material from the mounting surface of the cover.

Apple's patent FIG. 1B below depicts an example exploded view of an Apple Watch. The cover (#120) includes a central portion (#140) and a flange portion (#150); FIG. 6 depicts a flowchart of an additional example method for producing a chemically strengthened cover.

For more details, review Apple's granted patent 11,447,416.

Earlier today Patently Apple posted four Granted patent reports as follows:

01: Apple wins a Patent for a Scene Camera System for a Mixed Reality Headset that includes a 2-Dimensional Array of Cameras

02: Apple wins a Patent for devices that will provide users with displays that angle content so as to create a form of Privacy Mode

03: Apple has won two Project Titan Patents relating to Sliding Doors, Unfolding Sunroof Panels & Reinforced Windows, and

04: Apple wins a Patent for a Mixed Reality HMD that could assist those with various stages of Alzheimer's Disease

Today's Remaining Granted Patents

NIH Grant to Fund Development of Pulmonary Hypertension Algorithm – Healthcare Innovation

Digital health company Eko has received $2.7 million in grant funding to develop a machine learning algorithm that detects and stratifies pulmonary hypertension (PH) using phonocardiogram (PCG) and electrocardiogram (ECG) data provided by Eko's smart stethoscopes. The Small Business Innovation Research (SBIR) Direct Phase II grant is provided by the National Institutes of Health (NIH), part of the Department of Health and Human Services (HHS).

Pulmonary hypertension (PH) is a severe condition that occurs when the pressure in the vessels that carry blood from the heart to the lungs is higher than normal, causing undue stress on the heart. PH affects up to 1 percent of the global population and is a marker of poor health outcomes. PH can cause premature disability, heart failure, and death. Unfortunately, delays of over two years frequently occur between the onset of symptoms and the diagnosis of severe kinds of PH.

The gold standards for diagnosing PH are echocardiography and right heart catheterization, which are costly, invasive, and require a heart specialist. ECG-based AI models have been clinically proven to improve the diagnosis of PH but are challenging to deploy.

To address this challenge, Oakland, California-based Eko formed a research partnership with Lifespan Health System's Cardiovascular Institute to collect real-world PCG and ECG data using the Eko DUO ECG + Digital Stethoscope. The company said this data will help develop an algorithm that can detect PH and stratify its severity. This early identification tool aims to diagnose PH earlier and more accurately, leading to beneficial interventions that can save patients' lives.

"The major goal of this study is to determine whether an Eko algorithm based on phonocardiography coupled with electrocardiography can identify the presence and severity of pulmonary hypertension when compared to the current gold standard," said Gaurav Choudhary, M.D., principal investigator and Ruth and Paul Levinger Professor of Cardiology and Director of Cardiovascular Research at the Alpert Medical School of Brown University and Lifespan Cardiovascular Institute, in a statement. "This machine learning algorithm has the potential to be a low-cost, easily implementable, and sustainable medical technology that assists healthcare professionals in identifying more patients with pulmonary hypertension."

This award marks Eko's fourth SBIR grant from the NIH, bringing its total funding to date from the NIH for cardiopulmonary machine learning development to $6 million. A previous $2.7 million grant, awarded to the company in July 2020, funded the collaborative work with Northwestern Medicine Bluhm Cardiovascular Institute to validate algorithms that help healthcare professionals identify pathologic heart murmurs and valvular heart disease (VHD) during routine office visits. That VHD grant directly contributed to the FDA clearance and commercialization of Eko Murmur Analysis Software (EMAS), which the company says is the first and only machine learning algorithm to assist providers in identifying structural heart murmurs using a smart stethoscope.

See the rest here:
NIH Grant to Fund Development of Pulmonary Hypertension Algorithm - Healthcare Innovation

Of God and Machines – The Atlantic


Miracles can be perplexing at first, and artificial intelligence is a very new miracle. "We're creating God," the former Google Chief Business Officer Mo Gawdat recently told an interviewer. "We're summoning the demon," Elon Musk said a few years ago, in a talk at MIT. In Silicon Valley, good and evil can look much alike, but on the matter of artificial intelligence, the distinction hardly matters. Either way, an encounter with the superhuman is at hand.

Early artificial intelligence was simple: Computers that played checkers or chess, or that could figure out how to shop for groceries. But over the past few years, machine learning (the practice of teaching computers to adapt without explicit instructions) has made staggering advances in the subfield of Natural Language Processing (NLP), with new leaps arriving every year or so. Even so, the full brunt of the technology has not arrived yet. You might hear about chatbots whose speech is indistinguishable from that of humans, or about documentary makers re-creating the voice of Anthony Bourdain, or about robots that can compose op-eds. But you probably don't use NLP in your everyday life.

Or rather: If you are using NLP in your everyday life, you might not always know. Unlike search or social media, whose arrivals the general public encountered and discussed and had opinions about, artificial intelligence remains esoteric: every bit as important and transformative as the other great tech disruptions, but more obscure, tucked largely out of view.

Science fiction, and our own imagination, add to the confusion. We just can't help thinking of AI in terms of the technologies depicted in Ex Machina, Her, or Blade Runner: people-machines that remain pure fantasy. Then there's the distortion of Silicon Valley hype, the general fake-it-til-you-make-it atmosphere that gave the world WeWork and Theranos: People who want to sound cutting-edge end up calling any automated process artificial intelligence. And at the bottom of all of this bewilderment sits the mystery inherent to the technology itself, its direct thrust at the unfathomable. The most advanced NLP programs operate at a level that not even the engineers constructing them fully understand.

But the confusion surrounding the miracles of AI doesn't mean that the miracles aren't happening. It just means that they won't look how anybody has imagined them. Arthur C. Clarke famously said that technology sufficiently advanced is indistinguishable from magic. Magic is coming, and it's coming for all of us.

All technology is, in a sense, sorcery. A stone-chiseled ax is superhuman. No arithmetical genius can compete with a pocket calculator. Even the biggest music fan you know probably can't beat Shazam.

But the sorcery of artificial intelligence is different. When you develop a drug, or a new material, you may not understand exactly how it works, but you can isolate what substances you are dealing with, and you can test their effects. Nobody knows the cause-and-effect structure of NLP. That's not a fault of the technology or the engineers. It's inherent to the abyss of deep learning.

I recently started fooling around with Sudowrite, a tool that uses the GPT-3 deep-learning language model to compose predictive text, but at a much more advanced scale than what you might find on your phone or laptop. Quickly, I figured out that I could copy-paste a passage by any writer into the program's input window and the program would continue writing, sensibly and lyrically. I tried Kafka. I tried Shakespeare. I tried some Romantic poets. The machine could write like any of them. In many cases, I could not distinguish between a computer-generated text and an authorial one.
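
For readers who want to try the same continue-my-passage trick, here is a minimal sketch. Sudowrite itself runs on GPT-3 through OpenAI's API; as a stand-in assumption, the example below uses the small, open GPT-2 model through Hugging Face's transformers library, which is freely runnable but far less fluent.

# Minimal sketch of the "paste a passage, let the model continue it" workflow.
# Sudowrite is built on GPT-3; this stand-in uses the much smaller open GPT-2
# model via Hugging Face transformers (pip install transformers torch).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

passage = (
    "I wandered lonely as a cloud "
    "That floats on high o'er vales and hills,"
)

# max_new_tokens controls how much text the model appends after the pasted passage.
continuation = generator(passage, max_new_tokens=60, do_sample=True, temperature=0.9)
print(continuation[0]["generated_text"])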


I was delighted at first, and then I was deflated. I was once a professor of Shakespeare; I had dedicated quite a chunk of my life to studying literary history. My knowledge of style and my ability to mimic it had been hard-earned. Now a computer could do all that, instantly and much better.

A few weeks later, I woke up in the middle of the night with a realization: I had never seen the program use anachronistic words. I left my wife in bed and went to check some of the texts I'd generated against a few cursory etymologies. My bleary-minded hunch was true: If you asked GPT-3 to continue, say, a Wordsworth poem, the computer's vocabulary would never be one moment before or after appropriate usage for the poem's era. This is a skill that no scholar alive has mastered. This computer program was, somehow, expert in hermeneutics: interpretation through grammatical construction and historical context, the struggle to elucidate the nexus of meaning in time.

The details of how this could be are utterly opaque. NLP programs operate based on what technologists call parameters: pieces of information that are derived from enormous data sets of written and spoken speech, and then processed by supercomputers that are worth more than most companies. GPT-3 uses 175 billion parameters. Its interpretive power is far beyond human understanding, far beyond what our little animal brains can comprehend. Machine learning has capacities that are real, but which transcend human understanding: the definition of magic.
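
To make "parameters" concrete: in code, a model's parameters are simply the learned numbers it stores, and they can be counted directly. The toy sketch below (assuming PyTorch; the layer sizes are arbitrary, not GPT-3's architecture) counts them for a single transformer-style block and sets the result beside GPT-3's 175 billion.

# "Parameters" are simply the learned numbers a model stores. GPT-3 holds about
# 175 billion of them; this arbitrary toy transformer block holds on the order
# of a couple hundred thousand.
import torch.nn as nn

block = nn.TransformerEncoderLayer(d_model=128, nhead=4, dim_feedforward=512)
n_params = sum(p.numel() for p in block.parameters())
print(f"Toy transformer block parameters: {n_params:,}")
print(f"GPT-3 parameters:                 {175_000_000_000:,}")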

This unfathomability poses a spiritual conundrum. But it also poses a philosophical and legal one. In an attempt to regulate AI, the European Union has proposed transparency requirements for all machine-learning algorithms. Eric Schmidt, the ex-CEO of Google, noted that such requirements would effectively end the development of the technology. "The EU's plan requires that the system would be able to explain itself. But machine-learning systems cannot fully explain how they make their decisions," he said at a 2021 summit. You use this technology to think through what you can't; that's the whole point. Inscrutability is an industrial by-product of the process.


My little avenue of literary exploration is my own, and neither particularly central nor relevant to the unfolding power of artificial intelligence (although I can see, off the top of my head, that the tech I used will utterly transform education, journalism, film, advertising, and publishing). NLP has made its first strides into visual arts, too: Dall-E 2 has now created a limitless digital museum of AI-generated images drawn from nothing more than prompts.

Others have headed into deeper waters. Schmidt recently proposed a possible version of our AI future in a conversation with this magazine's executive editor, Adrienne LaFrance: "If you imagine a child born today, you give the child a baby toy or a bear, and that bear is AI-enabled," he said. "And every year the child gets a better toy. Every year the bear gets smarter, and in a decade, the child and the bear who are best friends are watching television and the bear says, 'I don't really like this television show.' And the kid says, 'Yeah, I agree with you.'"

Schmidt's vision does not yet exist. But in late 2020, Microsoft received a patent for chatbots that bring back the dead, using inputs from images, voice data, social media posts, electronic messages, written letters, etc. to create or modify a special index in the theme of the specific person's personality. Soon after, a company called Project December released a version of just such a personality matrix. It created bots such as William, which speaks like Shakespeare, and Samantha, a rather bland female companion. But it also allowed mourners to re-create dead loved ones. An article in the San Francisco Chronicle told the story of Joshua Barbeau, who created a bot of his deceased fiancée, Jessica Pereira. Their conversation started like this:

Joshua: Technically, I'm not really talking to you

Jessica: Huh?

Joshua: Youre a ghost.

Jessica: *looks at herself* How?

Joshua: Magic.

Jessica: I don't like magic. Where am I?

Joshua: I'm not sure. The internet? Where does it feel like you are?

Jessica: *smiles* Everywhere and nowhere. *cups her head with her hands* I am trying to work that out. How are you, honey?

Barbeau's conversation with Jessica continued for several months. His experience of Project December was far from perfect (there were glitches, there was nonsense, the bot's architecture decayed), but Barbeau really felt like he was encountering some kind of emanation of his dead fiancée. The technology, in other words, came to occupy a place formerly reserved for mediums, priests, and con artists. "It may not be the first intelligent machine," Jason Rohrer, the designer of Project December, has said, "but it kind of feels like it's the first machine with a soul."


What we are doing is teaching computers to play every language game that we can identify. We can teach them to talk like Shakespeare, or like the dead. We can teach them to grow up alongside our children. We can certainly teach them to sell products better than we can now. Eventually, we may teach them how to be friends to the friendless, or doctors to those without care.

PaLM, Google's latest foray into NLP, has 540 billion parameters. According to the engineers who built it, it can summarize text, reason through math problems, and use logic in a way that's not dissimilar from the way you and I do. These engineers also have no idea why it can do these things. Meanwhile, Google has also developed a system called Player of Games, which can be used with any game at all: games like Go, exercises in pure logic that computers have long been good at, but also games like poker, where each party has different information. This next generation of AI can toggle back and forth between brute computation and human qualities such as coordination, competition, and motivation. It is becoming an idealized solver of all manner of real-world problems previously considered far too complicated for machines: congestion planning, customer service, anything involving people in systems. These are the extremely early green shoots of an entire future tech ecosystem: The technology that contemporary NLP derives from was only published in 2017.

And if AI harnesses the power promised by quantum computing, everything I'm describing here would be the first dulcet breezes of a hurricane. Ersatz humans are going to be one of the least interesting aspects of the new technology. This is not an inhuman intelligence but an inhuman capacity for digital intelligence. An artificial general intelligence will probably look more like a whole series of exponentially improving tools than a single thing. It will be a whole series of increasingly powerful and semi-invisible assistants, a whole series of increasingly powerful and semi-invisible surveillance states, a whole series of increasingly powerful and semi-invisible weapons systems. The world would change; we shouldn't expect it to change in any kind of way that you would recognize.

Our AI future will be weird and sublime and perhaps we won't even notice it happening to us. The paragraph above was composed by GPT-3. I wrote up to "And if AI harnesses the power promised by quantum computing"; machines did the rest.

Technology is moving into realms that were considered, for millennia, divine mysteries. AI is transforming writing and art: the divine mystery of creativity. It is bringing back the dead: the divine mystery of resurrection. It is moving closer to imitations of consciousness: the divine mystery of reason. It is piercing the heart of how language works between people: the divine mystery of ethical relation.

All this is happening at a raw moment in spiritual life. The decline of religion in America is a sociological fact: Religious identification has been in precipitous decline for decades. Silicon Valley has offered two replacements: the theory of the simulation, which postulates that we are all living inside a giant computational matrix, and the theory of the singularity, in which the imminent arrival of a computational consciousness will reconfigure the essence of our humanity.

Like all new faiths, the tech religions cannibalize their predecessors. The simulation is little more than digital Calvinism, with an omnipotent divinity that preordains the future. The singularity is digital messianism, as found in various strains of Judeo-Christian eschatology: a pretty basic onscreen Revelation. Both visions are fundamentally apocalyptic. Stephen Hawking once said that "the development of full artificial intelligence could spell the end of the human race." Experts in AI, even the men and women building it, commonly describe the technology as an existential threat.

But we are shockingly bad at predicting the long-term effects of technology. (Remember when everybody believed that the internet was going to improve the quality of information in the world?) So perhaps, in the case of artificial intelligence, fear is as misplaced as that earlier optimism was.

AI is not the beginning of the world, nor the end. It's a continuation. The imagination tends to be utopian or dystopian, but the future is human, an extension of what we already are. My own experience of using AI has been like standing in a river with two currents running in opposite directions at the same time: Alongside a vertiginous sense of power is a sense of humiliating disillusionment. This is some of the most advanced technology any human being has ever used. But of 415 published AI tools developed to combat COVID with globally shared information and the best resources available, not one was fit for clinical use, a recent study found; basic errors in the training data rendered them useless. In 2015, the image-recognition algorithm used by Google Photos, outside of the intention of its engineers, identified Black people as gorillas. The training sets were monstrously flawed, biased as AI very often is. Artificial intelligence doesn't do what you want it to do. It does what you tell it to do. It doesn't see who you think you are. It sees what you do. The gods of AI demand pure offerings. Bad data in, bad data out, as they say, and our species contains a great deal of bad data.

Artificial intelligence is returning us, through the most advanced technology, to somewhere primitive, original: an encounter with the permanent incompleteness of consciousness. Religions all have their approaches to magic: transubstantiation for Catholics, the lost temple for the Jews. Even in the most scientific cultures, there is always the beyond. The acropolis in Athens was a fortress of wisdom, a redoubt of knowledge and the power it brings: through agriculture, through military victory, through the control of nature. But if you wanted the inchoate truth, you had to travel the road to Delphi.

A fragment of humanity is about to leap forward massively, and to transform itself massively as it leaps. Another fragment will remain, and look much the same as it always has: thinking meat in an inconceivable universe, hungry for meaning, gripped by fascination. The machines will leap, and the humans will look. They will answer, and we will question. The glory of what they can do will push us closer and closer to the divine. They will do things we never thought possible, and sooner than we think. They will give answers that we ourselves could never have provided. But they will also reveal that our understanding, no matter how great, is always and forever negligible. Our role is not to answer but to question, and to let our questioning run headlong, reckless, into the inarticulate.

See more here:
Of God and Machines - The Atlantic