Nobel for Greta Thunberg? In the age of climate change, coronavirus, it is possible – Deccan Herald

This year's Nobel Peace Prize could go to green campaigner Greta Thunberg and the Fridays for Future movement to highlight the link between environmental damage and the threat to peace and security, experts say.

The winner of the $1 million prize, arguably the world's top accolade, will be announced in Oslo on Oct. 9 from a field of 318 candidates. The prize can be split up to three ways.

The Swedish 17-year-old was nominated by three Norwegian lawmakers and two Swedish parliamentarians and if she wins, she would receive it at the same age as Pakistan's Malala Yousafzai, the youngest Nobel laureate thus far.

Asle Sveen, a historian and author of several books about the prize, said Thunberg would be a strong candidate for this year's award, her second nomination in as many years, with the US West Coast wildfires and rising temperatures in the Arctic "leaving people in no doubt" about global warming.

"Not a single person has done more to get the world to focus on climate change than her," Sveen told Reuters.

The committee has given the prize to environmentalists before, starting with Kenya's Wangari Maathai in 2004 for her campaign to plant 30 million trees across Africa, and in 2007 to Al Gore and the Intergovernmental Panel on Climate Change.

Also read: Officials, activists blame communications failure for climate inaction

In the era of the coronavirus crisis, the committee could also choose to highlight the threat of pandemics to peace and security, said Dan Smith, the director of the Stockholm International Peace Research Institute.

"There is a relationship between environmental damage and our increasing problem with pandemics and I wonder whether the Nobel Peace Prize Committee might want to highlight that," he told Reuters.

If the committee wanted to highlight this trend, he said, "there is obviously the temptation of Greta Thunberg".

The Fridays for Future movement started in 2018 when Thunberg began a school strike in Sweden to push for action on climate. It has since become a global protest.

Thunberg and her father Svante, who sometimes handles media queries for her, did not reply to requests for comment.

Many were sceptical when Greta, as she is often referred to, became the bookmaker's favourite to win last year's Nobel Peace Prize, especially with regards to her age, but her second nomination could strengthen her chances.

"Greta is re-nominated, which was the case for Malala. I said Malala was young when she was nominated the first time and I said Greta was young the first time she was nominated," Sveen said.

Yousafzai won in 2014.

Not Trump

Other known candidates included the "people of Hong Kong", NATO, Julian Assange, Chelsea Manning and Edward Snowden and jailed Saudi activist Loujain al-Hathloul.

Other possible choices are Reporters Without Borders, Angela Merkel and the World Health Organization, experts said, though it is unclear whether they are nominated.

Nominations are secret for 50 years but those who nominate can choose to publicise their choices. Thousands of people are eligible to nominate, including members of parliaments and governments, university professors and past laureates.

It is not known whether Donald Trump is nominated for this year's prize, though he is up for next year's award after a Norwegian lawmaker named the US President for helping broker a deal between Israel and the United Arab Emirates.

He is unlikely to win, Sveen and Smith agreed, not least for his dismantling of the international treaties to limit the proliferation of nuclear weapons, a cause dear to Nobel committees.

"He is divisive and seems to not take a clear stance against the violence the right wing perpetrates in the US," said Smith.

"And that is just the first list."

See more here:
Nobel for Greta Thunberg? In the age of climate change, coronavirus, it is possible - Deccan Herald

US military ‘obliterated two journalists in Apache helicopter attack then covered it up’ – Mirror.co.uk

The US military 'obliterated' two Reuters journalists in an Apache helicopter attack in Iraq and covered up what they had done, the Old Bailey heard.

An ex bureau chief for the news agency, who developed PTSD and now works as a trauma counsellor, was the last witness in the second week of Assange's extradition hearing.

Dean Yates told in a statement of the 'full horror' of 'Collateral Murder' - the video WikiLeaks released in 2010 which showed US soldiers laughing as they fired weapons from the helicopter.

His statement was read by Assange's barrister Edward Fitzgerald, who was reprimanded several times by the judge for wandering off-topic.

The statement said: "Early on July 12 2007 I was at my desk in the Reuters office in Baghdad's red zone suddenly loud wailing broke out near the back of our office.

"I still remember the anguished face of the Iraqi colleague who burst through the door he said Nami and Saeed have been killed.

"Namir photographer had told colleagues he was going to check out a possible US dawn airstrike Saeed, a driver/fixer [went with him].

"It was my task at the same time as trying to find out what had happened to file a news story about the deaths.

"After midnight the US military released a statement (that said) 'Coalition Forces were clearly engaged in combat operations against a hostile force'. I updated my story.

"Reuters staff had by now spoken to 14 witnesses in [the area] al-Amin. All of them said they were unaware of any firefight that might have promoted the helicopter strike.

"The Iraqi staff at Reuters were concerned that the bureau was too soft on the US military.

"But I could only write what we could establish and the US military was insisting Saeed and Namir were killed during a clash.

"Crazy Horse 1-8 [the helicopter] requested permission to fire after seeing a group of 'military-aged males' who appeared to have weapons and were acting suspiciously."

The statement added that there was debate over what led the Apache to open fire if there was no firefight.

It added that the men were seen to be 'expressing hostile intent' because they were apparently armed.

"They said 'OK we are going to show you a little bit of footage'," the statement said.

"I can see Namir crouching down with his camera which the pilot thinks is an RPG.

"The cannon fire hits them. The generals stopped the tape."

The judge interjected: "This is of no relevance."

Mr Fitzgerald: "It's against the backdrop of denial that the video is important... They ask for information and there is three denials. There was an FOI application denied. WikiLeaks release the Collateral Murder video on April 5 2010."

Returning to the statement, he read: "Namir and Saeed can be seen with a group of men in a street [weapons] are pointed down. The men walk about casually.

"Crazy Horse 1-8 seeks and gets permission from the ground unit to attack. At that moment, however, the crew's line of sight is blocked by houses. Some 20 seconds later Namir can be seen crouched down with his long lens camera raised.

"Here was the full horror - Saeed had been trying to get up for roughly three minutes when a Good Samaritan pulls over in his minivan and the Apache opens fire again and just obliterates them - it was totally traumatising.

"I immediately realised that the US Military had lied to us I feel cheated, they were not being honest."

Mr Fitzgerald said: "Had it not been for Chelsea Manning and Julian Assange the truth of what happened on that street in Baghdad would not have been brought to the world. What he did was 100 per cent truth telling... how the US military behaved and lied."

The hearing was adjourned until Monday.

Read the rest here:
US military 'obliterated two journalists in Apache helicopter attack then covered it up' - Mirror.co.uk

What is Imblearn Technique – Everything To Know For Class Imbalance Issues In Machine Learning – Analytics India Magazine

In machine learning, while building a classification model we sometimes face situations where the classes are not present in equal proportion, for example 500 records of the 0 class and only 200 records of the 1 class. This is called class imbalance. Machine learning models are designed to attain maximum accuracy, but in these situations the model gets biased towards the majority class, which ultimately shows up in degraded precision and recall. So how do we build a model on this kind of data set so that it correctly classifies each class and does not get biased?

To get rid of these class imbalance issues, a few techniques provided by the imblearn library are used, which is designed precisely for these situations. Imblearn techniques help to either upsample the minority class or downsample the majority class so that the classes reach equal proportion. Through this article, we will discuss imblearn techniques and how we can use them to do upsampling and downsampling. For this experiment, we are using the Pima Indian Diabetes data since it is an imbalanced data set. The data is available on Kaggle for download.

What will we learn from this article?

Class imbalance issues arise when we do not have equal ratios of the different classes. Consider an example: we have to build a machine learning model that will predict whether a loan applicant will default or not. The data set has 500 rows of data points for the default class, but for the non-default class we are only given 200 rows. When we build the model, it is obvious that it will be biased towards the default class because it is the majority class. The model will learn to classify the default class much better than the non-default class, and that cannot be called a good predictive model. So, to resolve this problem, we make use of the techniques called Imblearn Techniques. They help us either reduce the majority class (default in this example) to the same ratio as the non-default class, or vice versa.

Imblearn techniques are methods by which we can generate a data set that has an equal ratio of classes. A predictive model built on this type of data set is able to generalize better. We mainly have two options to treat an imbalanced data set: upsampling and downsampling. Upsampling is where we generate synthetic data for the minority class to match the ratio of the majority class, whereas in downsampling we reduce the majority class data points to match the minority class.

Now let us practically understand how upsampling and downsampling are done. We will first install the imblearn package, then import all the required libraries and the Pima data set. Use the code below for the same.
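A minimal setup sketch along the lines the text describes might look like this; the local file name diabetes.csv and the exact imports are assumptions rather than the article's original listing.

# pip install imbalanced-learn   (installs the imblearn package)
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from imblearn.over_sampling import SMOTE

# "diabetes.csv" is an assumed local copy of the Pima Indian Diabetes data from Kaggle
df = pd.read_csv("diabetes.csv")

# Column 8 (the last column) holds the 0/1 outcome; check the class counts
print(df.iloc[:, 8].value_counts())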

As we can check, there are a total of 500 rows that fall under the 0 class and 268 rows in the 1 class. This gives an imbalanced data set where the majority of the data points lie in the 0 class. Now we have two options: upsampling or downsampling. We will do both and check the results. We will first divide the data into features and target, X and y respectively. Then we will divide the data set into training and testing sets. Use the below code for the same.

X = df.values[:,0:8]

y = df.values[:,8]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=7)

Now we will check the count of both classes in the training data and will use upsampling to generate new data points for the minority class. Use the below code to do the same.

print("Count of 1 class in training set before upsampling :" ,(sum(y_train==1)))

print("Count of 0 class in training set before upsampling :",format(sum(y_train==0)))

We are using the SMOTE technique from imblearn to do the upsampling. It generates data points based on the K-nearest neighbours algorithm. We have set k_neighbors = 3, but this can be tweaked since it is a hyperparameter. We will first generate the new data points and then compare the counts of the classes after upsampling. Refer to the below code for the same.

smote = SMOTE(sampling_strategy=1.0, k_neighbors=3, random_state=1)

X_train_new, y_train_new = smote.fit_resample(X_train, y_train.ravel())

print("Count of 1 class in training set after upsampling :" ,(sum(y_train_new==1)))

print("Count of 0 class in training set after upsampling :",(sum(y_train_new==0)))

Now the classes are balanced. Next we will build a random forest model on the original data and then on the upsampled data. Use the below code for the same.
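A sketch of that comparison, assuming the variables from the earlier snippets and arbitrary random forest settings (this is not the article's original listing):

# Model trained on the original, imbalanced training data
rf_original = RandomForestClassifier(n_estimators=100, random_state=7)
rf_original.fit(X_train, y_train)
print(classification_report(y_test, rf_original.predict(X_test)))

# Model trained on the SMOTE-upsampled training data, evaluated on the same untouched test set
rf_upsampled = RandomForestClassifier(n_estimators=100, random_state=7)
rf_upsampled.fit(X_train_new, y_train_new)
print(classification_report(y_test, rf_upsampled.predict(X_test)))

Comparing the two classification reports shows how balancing the training data shifts precision and recall on the minority (1) class.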

Now we will downsample the majority class by randomly deleting records from the original data to match the minority class. Use the below code for the same.

# indices of each class (column 8 is the outcome); keeping all diabetic rows plus an equal
# random sample of non-diabetic rows gives a balanced, downsampled index set
Diabetic_indices = np.where(df.values[:, 8] == 1)[0]
Non_diabetic_indices = np.where(df.values[:, 8] == 0)[0]
random_indices = np.random.choice(Non_diabetic_indices, len(Diabetic_indices), replace=False)
down_sample_indices = np.concatenate([Diabetic_indices, random_indices])

Now we will again divide the data set and will again build the model. Use the below code for the same.
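Building on the index arrays above, a hedged sketch of re-splitting the downsampled rows and retraining might look like this (again, not the article's original code):

down_sampled = df.values[down_sample_indices]
X_down, y_down = down_sampled[:, 0:8], down_sampled[:, 8]
X_train_d, X_test_d, y_train_d, y_test_d = train_test_split(X_down, y_down, test_size=0.33, random_state=7)

rf_downsampled = RandomForestClassifier(n_estimators=100, random_state=7)
rf_downsampled.fit(X_train_d, y_train_d)
print(classification_report(y_test_d, rf_downsampled.predict(X_test_d)))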

Conclusion

In this article, we discussed how we can pre-process an imbalanced class data set before building predictive models. We explored imblearn techniques and used the SMOTE method to generate synthetic data. We first did upsampling and then performed downsampling. There are more methods in imblearn, like Tomek links and Cluster Centroids, that can also be used for the same problem. You can check the official documentation here.

Also check this article Complete Tutorial on Tkinter To Deploy Machine Learning Model that will help you to deploy machine learning models.


Read the original:
What is Imblearn Technique - Everything To Know For Class Imbalance Issues In Machine Learning - Analytics India Magazine

Global Machine Learning Market Tends To Show Steady Growth Post Pandemic With Regional Overview and Top Key Players – Verdant News

The research study on the Machine Learning Market added by Reportspedia presents an extensive analysis of the current Machine Learning Market size, drivers, trends, opportunities, and challenges, as well as key market segments. In continuation of this data, the Machine Learning Market report covers various marketing strategies followed by key players and distributors.

The report also mentions the expected CAGR of the global Machine Learning Market during the estimated period. The report provides readers with accurate past statistics and predictions of the future. The Global Machine Learning Market is valued at USD XX million in 2020 and is predicted to reach USD XX million by the end of 2027, growing at a CAGR of XX% between 2020 and 2027.

Free Sample PDF Copy Here @:

https://www.reportspedia.com/report/technology-and-media/2015-2027-global-machine-learning-industry-market-research-report,-segment-by-player,-type,-application,-marketing-channel,-and-region/57400#request_sample

Top Key Players:

Luminoso Technologies, Inc.; Hewlett Packard Enterprise Development LP; SAS Institute Inc.; RapidMiner, Inc.; Angoss Software Corporation; Amazon Web Services Inc.; TIBCO Software Inc.; Dataiku; BigML, Inc.; Oracle Corporation; Fractal Analytics Inc.; Fair Isaac Corporation; Domino Data Lab, Inc.; TrademarkVision; Google, Inc.; Alpine Data; Teradata; IBM Corporation; Dell Inc.; Baidu, Inc.; Intel Corporation; KNIME.com AG; SAP SE; Microsoft Corporation

The report on the Machine Learning market also provides details of the companies covered, SWOT analysis, PESTEL, Porter's five forces, and product life cycle. At the start, the report offers a basic introduction to the Machine Learning industry, containing its definition, applications and production technique. Then, the report illustrates the key international Machine Learning industry players in detail.

Geographical Analysis of Machine Learning Market:

Ask For Discount @:

https://www.reportspedia.com/discount_inquiry/discount/57400

Machine Learning Market Segmentation:

Machine Learning Market Segmentation By Type:

Cloud, On-Premises

Machine Learning Market Segmentation By Application:

BFSI, Healthcare and Life Sciences, Retail, Telecommunication, Government and Defense, Manufacturing, Energy and Utilities

Global Machine Learning Market: Competitive Analysis

This section of the report identifies a variety of key manufacturers in the market. It helps the reader understand the strategies and collaborations that players are focusing on to combat competition in the market. The wide-ranging report provides a microscopic look at the market. The reader can discover the footprints of the manufacturers by knowing about the global revenue and sales of manufacturers during the forecast period of 2020 to 2027.

In this Machine Learning market study, the following years are considered to project the market footprint:

History Year: 2014-2018

Base Year: 2018

Estimated Year: 2019

Forecast Year: 2020-2027

Do You Have Any Query Or Specific Requirement? Ask Our Industry Expert @:

https://www.reportspedia.com/report/technology-and-media/2015-2027-global-machine-learning-industry-market-research-report,-segment-by-player,-type,-application,-marketing-channel,-and-region/57400#inquiry_before_buying

Machine Learning market research addresses the following queries:

Main points of the table of contents:

Chapter One: Report Overview

Chapter Two: Trends in Global Growth

Chapter Three: Market Share of Major Players

Chapter Four: Distribution by Type and Application

Chapter Five: United States

Chapter Six: Europe

Chapter Seven: China

Chapter Eight: Japan

Chapter Nine: Southeast Asia

Chapter Ten: India

Chapter Eleven: Central and South America

Chapter Twelve: Profiles of International Players

Chapter Thirteen: Market Forecast 2020-2027

Chapter Fourteen: Analyst Views / Findings

Get Full Table of Contents @:

https://www.reportspedia.com/report/technology-and-media/2015-2027-global-machine-learning-industry-market-research-report,-segment-by-player,-type,-application,-marketing-channel,-and-region/57400#table_of_contents

Thanks for reading this article; you can also get individual chapter-wise sections or region-wise report versions, such as North America, Europe, and Asia.

Follow this link:
Global Machine Learning Market Tends To Show Steady Growth Post Pandemic With Regional Overview and Top Key Players - Verdant News

PREDICTING THE OPTIMUM PATH – Port Strategy

A joint venture has seen the implementation of machine learning at HHLA's Container Terminal Burchardkai to optimise import container yard positioning and reduce re-handling moves.

The elimination of costly re-handling moves of import containers has recently been the focus of a joint project between container terminal operator HHLA, its affiliate Hamburg Port Consulting (HPC) and INFORM, the Artificial Intelligence (AI) systems supplier. Machine learning sits at the heart of the system.

Dwell time is the unit of time used to measure the period in which a container remains in a container terminal with this typically running from its arrival off a vessel until leaving the terminal via truck, rail or another vessel.

For import containers there is often no specific information available on the pick-up time when selecting a storage slot in the container stack. This can lead to an inefficient container storage location in the yard generating, in turn, the requirement for additional shuffle moves that require extra resources including maintenance and energy consumption.

To mitigate this operational inefficiency, the project partners - HHLA, HPC and INFORM - have recently run a pilot project at HHLA's Container Terminal Burchardkai (CTB) focused on machine learning technology, with this applied in order to predict individual import container dwell times and thereby reduce costly re-handling/shuffle moves.

As a specialist in IT software integration and terminal operations, HPC employed the deep learning approach to identify hidden patterns from historical data of container moves at HHLA CTB. This was undertaken over a period of two years and with the acquired information processed into high quality data sets. Assessed by the Syncrotess Machine Learning Module from INFORM and validated by the HPC simulation tool, the results show a significant reduction of shuffle moves resulting in a reduced truck turn time.

PRODUCTIVE IMPLEMENTATION

Dr. Alexis Pangalos, Partner at HPC, discussing the project highlights, notes: "It was a productive implementation of INFORM's Artificial Intelligence (AI) solution for the choice of container storage positions at CTB. The Machine Learning (ML) Module was trained with data from CTB's container handling operations and the outcome from this is a system tailor-made for HHLA's operations."

HPC together with INFORM have integrated the Syncrotess ML Module into the slot allocation algorithms already running within CTB's terminal control system, ITS.

PREDICTING DWELL TIME

INFORM's AI solution predicts the dwell time (i.e., the time period the container is expected to be stored in the yard) and the outbound mode of transport (e.g., rail, truck, vessel), both of which are crucial criteria for selecting an optimised container storage location within the yard: a location that avoids unnecessary re-handling.
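As a concrete illustration only, and emphatically not INFORM's Syncrotess ML Module, a toy version of the two predictions described above could be sketched as follows; the file name, feature columns and choice of gradient boosting are all assumptions.

import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor, GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Hypothetical historical import-container moves with assumed column names
moves = pd.read_csv("historical_container_moves.csv")
X = pd.get_dummies(moves[["arrival_hour", "weekday", "vessel_service", "container_type", "consignee"]],
                   columns=["vessel_service", "container_type", "consignee"])

# Target 1: dwell time in hours (regression)
X_tr, X_te, y_tr, y_te = train_test_split(X, moves["dwell_hours"], test_size=0.2, random_state=0)
dwell_model = GradientBoostingRegressor().fit(X_tr, y_tr)
print("Dwell-time R^2:", dwell_model.score(X_te, y_te))

# Target 2: outbound mode of transport, e.g. rail / truck / vessel (classification)
Xm_tr, Xm_te, m_tr, m_te = train_test_split(X, moves["outbound_mode"], test_size=0.2, random_state=0)
mode_model = GradientBoostingClassifier().fit(Xm_tr, m_tr)
print("Outbound-mode accuracy:", mode_model.score(Xm_te, m_te))

In a terminal system, predictions of this kind would feed the slot allocation logic so that a container expected to leave soon, say by truck, is not buried deep in the stack.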

"Utilising machine learning and AI and integrating these technologies into existing IT infrastructure are the success factors for reaching the next level of optimisations," says Jens Hansen, Executive Board Member responsible for IT at HHLA. "A detailed analysis, and a smooth interconnectivity between all different systems, enable the value of improved safety while reducing costs and greenhouse gas emissions," he underlines.

DETAILED DOMAIN KNOWLEDGE

"Data availability and data processing are key elements when it comes to utilising AI technology," says Pangalos. "It requires a detailed domain knowledge of terminal operations to unlock greater productivity of the terminal equipment and connected processes."

The implementation is based on a machine learning assessment INFORM undertook in 2018, in which it set out to determine whether it could improve optimisation and operational outcomes using its broader ML algorithms developed for use in other industries such as finance and aviation.

As of 2019, system results indicated a prediction accuracy of 26% for dwell time predictions and 33% for outbound mode of transport predictions.

Dr. Eva Savelsberg, Senior Vice President of INFORM's Logistics Division, notes: "AI and machine learning allow us to leverage data from our past performance to inform us about how best to approach our future operations. Our ML Module gives our Operations Research-based algorithms the best footing for making complex decisions about what to do in the future."

"INFORM's Machine Learning Module allows CTB to leverage insights generated from algorithms that continuously learn from historical data."

Further Information: Matthew Wittemeier m.wittemeier@inform-software.com

Visit link:
PREDICTING THE OPTIMUM PATH - Port Strategy

Global Machine Learning Courses Market Research Report 2015-2027 of Major Types, Applications and Competitive Vendors in Top Regions and Countries -…

Strategic growth, latest insights, developmental trends in Global & Regional Machine Learning Courses Market with post-pandemic situations are reflected in this study. End to end Industry analysis from the definition, product specifications, demand till forecast prospects are presented. The complete industry developmental factors, historical performance from 2015-2027 is stated. The market size estimation, Machine Learning Courses maturity analysis, risk analysis, and competitive edge is offered. The segmental market view by types of products, applications, end-users, and top vendors is stated. Market drivers, restraints, opportunities in Machine Learning Courses industry with the innovative and strategic approach is offered. Machine Learning Courses product demand across regions like North America, Europe, Asia-Pacific, South and Central America, Middle East, and Africa is analyzed. The emerging segments, CAGR, revenue accumulation, feasibility check is specified.

Know more about this report or browse reports of your interest here:https://www.reportscheck.com/shop/global-machine-learning-courses-market-research-report-2015-2027-of-major-types-applications-and-competitive-vendors-in-top-regions-and-countries/#sample-request

COVID-19 has greatly impacted different Machine Learning Courses segments causing disruptions in the supply chain, timely product deliveries, production processes, and more. Post pandemic era the Machine Learning Courses industry will emerge with completely new norms, plans and policies, and development aspects. There will be new risk factors involved along with sustainable business plans, production processes, and more. All these factors are deeply analyzed by Reports Check's domain expert analysts for offering quality inputs and opinions.

Check out the complete table of contents, segmental view of this industry research report:https://www.reportscheck.com/shop/global-machine-learning-courses-market-research-report-2015-2027-of-major-types-applications-and-competitive-vendors-in-top-regions-and-countries/#table-of-contents

The qualitative and quantitative information is formulated in Machine Learning Courses report. Region-wise or country-wise reports are exclusively available on clients' demand with Reports Check. The market size estimation, Machine Learning Courses industry's competition, production capacity is evaluated. Also, import-export details, pricing analysis, upstream raw material suppliers, and downstream buyers analysis is conducted.

Receive complete insightful information with past, present and forecast situations of Global Machine Learning Courses Market and Post-Pandemic Status. Our expert analyst team is closely monitoring the industry prospects and revenue accumulation. The report will answer all your queries as well as you can make a custom request with free sample report.

A full-fledged, comprehensive research technique is used to derive Machine Learning Courses market's quantitative information. The gross margin, Machine Learning Courses sales ratio, revenue estimates, profits, and consumer analysis is provided. The complete global Machine Learning Courses market size, regional, country-level market size, & segmentation-wise market growth and sales analysis are provided. Value chain optimization, trade policies, regulations, opportunity analysis map, & marketplace expansion, and technological innovations are stated. The study sheds light on the sales growth of regional and country-level Machine Learning Courses market.

The company overview, total revenue, Machine Learning Courses financials, SWOT analysis, and product launch events are specified. We offer competitor analysis under the competitive landscape section for every competitor separately. The report scope section provides in-depth analysis of overall growth, leading companies with their successful Machine Learning Courses marketing strategies, market contribution, recent developments, and historic and present status.

Segment 1: Describes Machine Learning Courses market overview with definition, classification, product picture, Machine Learning Courses specifications

Segment 2: Machine Learning Courses opportunity map, market driving forces, restraints, and risk analysis

Segment 3:Competitive landscape view, sales, revenue, gross margin, pricing analysis, and global market share analysis

Segment 4:Machine Learning Courses Industry fragments by key types, applications, top regions, countries, top companies/manufacturers and end-users

Segment 5:Regional level growth, sales, revenue, gross margin from 2015-2020

Segment 6,7,8:Country-level sales, revenue, growth, market share from 2015-2020

Segment 9:Market sales, size, and share by each product type, application, and regional demand with production and Machine Learning Courses volume analysis

Segment 10:Machine Learning Courses Forecast prospects situations with estimates revenue generation, share, growth rate, sales, demand, import-export, and more

Segment 11 & 12:Machine Learning Courses sales and marketing channels, distributor analysis, customers, research findings, conclusion, and analysts views and opinions

Click to know more about our company and service offerings:https://www.reportscheck.com/shop/global-machine-learning-courses-market-research-report-2015-2027-of-major-types-applications-and-competitive-vendors-in-top-regions-and-countries/

An efficient research technique with verified and reliable data sources is what makes us stand out of the crowd. Excellent business approach, diverse clientele, in-depth competitor analysis, and efficient planning strategy is what makes us stand out of the crowd. We cater to different factors like technological innovations, economic developments, R&D, and mergers and acquisitions are specified. Credible business tactics and extensive research is the key to our business which helps our clients in profitable business plans.

Contact Us:

Olivia Martin

Email: [emailprotected]

Website:www.reportscheck.com

Phone: +1(831)6793317

Read more here:
Global Machine Learning Courses Market Research Report 2015-2027 of Major Types, Applications and Competitive Vendors in Top Regions and Countries -...

When AI in healthcare goes wrong, who is responsible? – Quartz

Artificial intelligence can be used to diagnose cancer, predict suicide, and assist in surgery. In all these cases, studies suggest AI outperforms human doctors in set tasks. But when something does go wrong, who is responsible?

There's no easy answer, says Patrick Lin, director of the Ethics and Emerging Sciences Group at California Polytechnic State University. At any point in the process of implementing AI in healthcare, from design to data and delivery, errors are possible. "This is a big mess," says Lin. "It's not clear who would be responsible because the details of why an error or accident happens matters. That event could happen anywhere along the value chain."

Design includes creation of both hardware and software, plus testing the product. Data encompasses the mass of problems that can occur when machine learning is trained on biased data, while deployment involves how the product is used in practice. AI applications in healthcare often involve robots working with humans, which further blurs the line of responsibility.

Responsibility can be divided according to where and how the AI system failed, says Wendell Wallach, lecturer at Yale University's Interdisciplinary Center for Bioethics and the author of several books on robot ethics. "If the system fails to perform as designed or does something idiosyncratic, that probably goes back to the corporation that marketed the device," he says. "If it hasn't failed, if it's being misused in the hospital context, liability would fall on who authorized that usage."

Intuitive Surgical Inc., the company behind the Da Vinci Surgical system, has settled thousands of lawsuits over the past decade. Da Vinci robots always work in conjunction with a human surgeon, but the company has faced allegations of clear error, including machines burning patients and broken parts of machines falling into patients.

Some cases, though, are less clear-cut. If diagnostic AI trained on data that over-represents white patients then misdiagnoses a Black patient, it's unclear whether the culprit is the machine-learning company, those who collected the biased data, or the doctor who chose to listen to the recommendation. "If an AI program is a black box, it will make predictions and decisions as humans do, but without being able to communicate its reasons for doing so," writes attorney Yavar Bathaee in a paper outlining why the legal principles that apply to humans don't necessarily work for AI. "This also means that little can be inferred about the intent or conduct of the humans that created or deployed the AI, since even they may not be able to foresee what solutions the AI will reach or what decisions it will make."

The difficulty in pinning the blame on machines lies in the impenetrability of the AI decision-making process, according to a paper on tort liability and AI published in the AMA Journal of Ethics last year. "For example, if the designers of AI cannot foresee how it will act after it is released in the world, how can they be held tortiously liable?" write the authors. "And if the legal system absolves designers from liability because AI actions are unforeseeable, then injured patients may be left with fewer opportunities for redress."

AI, as with all technology, often works very differently in the lab than in a real-world setting. Earlier this year, researchers from Google Health found that a deep-learning system capable of identifying symptoms of diabetic retinopathy with 90% accuracy in the lab caused considerable delays and frustrations when deployed in real life.

Despite the complexities, clear responsibility is essential for artificial intelligence in healthcare, both because individual patients deserve accountability, and because a lack of responsibility allows mistakes to flourish. "If it's unclear who's responsible, that creates a gap; it could be no one is responsible," says Lin. "If that's the case, there's no incentive to fix the problem." One potential response, suggested by Georgetown legal scholar David Vladeck, is to hold everyone involved in the use and implementation of the AI system accountable.

AI and healthcare often work well together, with artificial intelligence augmenting the decisions made by human professionals. Even as AI develops, these systems aren't expected to replace nurses or automate human doctors entirely. But as AI improves, it gets harder for humans to go against machines' decisions. If a robot is right 99% of the time, then a doctor could face serious liability if they make a different choice. "It's a lot easier for doctors to go along with what that robot says," says Lin.

Ultimately, this means humans are ceding some authority to robots. There are many instances where AI outperforms humans, and so doctors should defer to machine learning. But patient wariness of AI in healthcare is still justified when there's no clear accountability for mistakes. "Medicine is still evolving. It's part art and part science," says Lin. "You need both technology and humans to respond effectively."

Read more here:
When AI in healthcare goes wrong, who is responsible? - Quartz

Twitter is looking into why its photo preview appears to favor white faces over Black faces – The Verge

Twitter said it was looking into why the neural network it uses to generate photo previews apparently chooses to show white people's faces more frequently than Black faces.

Several Twitter users demonstrated the issue over the weekend, posting examples of posts that had a Black person's face and a white person's face. Twitter's preview showed the white faces more often.

The informal testing began after a Twitter user tried to post about a problem he noticed in Zoom's facial recognition, which was not showing the face of a Black colleague on calls. When he posted to Twitter, he noticed it too was favoring his white face over his Black colleague's face.

Users discovered the preview algorithm chose non-Black cartoon characters as well.

When Twitter first began using the neural network to automatically crop photo previews, machine learning researchers explained in a blog post how they started with facial recognition to crop images, but found it lacking, mainly because not all images have faces:

Previously, we used face detection to focus the view on the most prominent face we could find. While this is not an unreasonable heuristic, the approach has obvious limitations since not all images contain faces. Additionally, our face detector often missed faces and sometimes mistakenly detected faces when there were none. If no faces were found, we would focus the view on the center of the image. This could lead to awkwardly cropped preview images.

Twitter chief design officer Dantley Davis tweeted that the company was investigating the neural network, as he conducted some unscientific experiments with images.

Liz Kelley of the Twitter communications team tweeted Sunday that the company had tested for bias but hadn't found evidence of racial or gender bias in its testing. "It's clear that we've got more analysis to do," Kelley tweeted. "We'll open source our work so others can review and replicate."

Twitter chief technology officer Parag Agrawal tweeted that the model needed continuous improvement, adding he was eager to learn from the experiments.

Read more:
Twitter is looking into why its photo preview appears to favor white faces over Black faces - The Verge

Solving the crux behind Apple’s Silicon Strategy – Medium

In its latest keynote address, headed by CEO Tim Cook, Apple unveiled its new A14 Bionic chip, a 5 nm ARM-based chipset.

This System on a Chip (SoC) from Apple is expected to power iPhone 12 and iPad Air (2020) models. The chipset integrates around 11.8 billion transistors.

For over a decade, Apple's world-class silicon design team has been building and refining Apple SoCs. Using these designs, Apple has been able to develop the latest iPhone, iPad and Apple Watch models that are industry leaders in terms of class and performance. In June of 2020, Apple announced that it will transition the Mac to its custom silicon to offer better technological performance.

Now, Apple Silicon is basically a processor made in-house, akin to what is powering the iPhone and iPad family of devices. This ARM move will result in ditching the reliance on Intel chipsets for future Macs. This transition to silicon will also establish a common architecture across all Apple products, making it far easier for developers to write and optimize their apps for the entire ecosystem. In fact, developers can now start focusing on updating their applications to take advantage of the enhanced capabilities of Apple silicon.

Along with this, Apple also introduced macOS Big Sur earlier this year, which will be the next major macOS release (version 11.0) and includes technologies that will facilitate a smooth transition to the Apple silicon experience. This will be the first time that developers will be able to make their iOS and iPadOS apps available on the Mac without modifications. The Apple silicon powered Macs will offer industry-leading performance per watt and higher-performance GPUs. To help developers get accustomed to the new transition, Apple is also launching the Universal App Quick Start Program to guide developers through the entire transition.

Apple plans to ship the new Mac by the end of the year and complete the transition in about two years. This being said Apple will continue to release new versions for Intel-based Mac for years to come.

Apple has been explicit about how serious it is about machine learning on its SoCs. The Apple A14 includes second-generation machine learning accelerators in the CPU for 10 times faster machine learning calculations. The combination of the new Neural Engine, machine learning accelerators, advanced power management, unified memory architecture and the Apple high-performance GPU enables powerful on-device experiences for image recognition, natural language learning, analysing motion, and maybe even a machine learning enabled GPS!

According to a recent patent application by Apple, the company has been working on a technology that implements a system for estimating the device location based on a global positioning system consisting of a Global Navigation Satellite System (GNSS) satellite, and that receives a set of parameters associated with the estimated position. The processor is further configured to apply the set of parameters and the estimated position to a machine learning model that has been trained on positions relative to the satellite. The estimated position and the output of the model are then provided to a Kalman filter for a more accurate location.
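To make the patent language concrete, here is a purely illustrative sketch, not Apple's implementation: a hypothetical learned error model corrects each raw GNSS fix, and a simple one-dimensional Kalman filter then smooths the corrected fixes. All names and noise values below are assumptions.

import numpy as np

def kalman_step(x, P, z, R, Q=1e-3):
    # One predict/update cycle of a 1-D Kalman filter with a random-walk motion model
    x_pred, P_pred = x, P + Q
    K = P_pred / (P_pred + R)            # Kalman gain
    return x_pred + K * (z - x_pred), (1 - K) * P_pred

def fuse_positions(raw_fixes, contexts, error_model, meas_noise=25.0):
    # error_model stands in for the trained ML model described in the patent: it
    # predicts the error of a raw fix from contextual parameters (all hypothetical here)
    x, P = raw_fixes[0], 100.0
    fused = []
    for z_raw, ctx in zip(raw_fixes, contexts):
        z = z_raw - error_model(ctx)     # ML-corrected measurement
        x, P = kalman_step(x, P, z, meas_noise)
        fused.append(x)
    return fused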

This technology may perform significantly better than what a mobile device alone can achieve in most non-aided modes of operation. Apple's patent to improve GPS in the upcoming 5G era might give it an advantage over existing resources.

Apple's move to its own ARM chips comes just as the company unveils macOS version 11.0 (Big Sur). That means ARM-based Mac computers will continue to run macOS instead of switching to iOS 14, similar to the approach taken with existing Windows laptops that use Qualcomm ARM-based processors. Apple apparently has its hardware and software teams working together, given that they have found a way to keep all their applications functioning seamlessly from day one of the launch, through Rosetta 2, which acts as an emulator and translator that will allow Intel-made apps to run on Silicon-powered devices.

Moreover, the Apple ecosystem acts as the catalyst for innovation in the company and is not limited to the hardware and software products, but also around its services.

Putting a foot forward in that direction is the Apple One Subscription.

Apple, with its calm dignity, diligent market study and unflinching courage to innovate, has taken its own time to come up with its strategic silicon move. Apple stayed focused on its long-term goals instead of following the hype, trends and gimmicks set out by its competitors to gain customer attention. This ability to think differently is a driving force behind its success.

And owing to the current state of affairs Apple has played it relatively safe this year, sticking to their core offerings. We can expect an exciting iPhone, iMac and MacOS launch later this year.

Let's gear up for another round of innovation sponsored by Apple.

See original here:
Solving the crux behind Apple's Silicon Strategy - Medium

Algorithms may never really figure us out thank goodness – The Boston Globe

An unlikely scandal engulfed the British government last month. After COVID-19 forced the government to cancel the A-level exams that help determine university admission, the British education regulator used an algorithm to predict what score each student would have received on their exam. The algorithm relied in part on how the school's students had historically fared on the exam. Schools with richer children tended to have better track records, so the algorithm gave affluent students, even those on track for the same grades as poor students, much higher predicted scores. High-achieving, low-income pupils whose schools had not previously performed well were hit particularly hard. After threats of legal action and widespread demonstrations, the government backed down and scrapped the algorithmic grading process entirely. This wasn't an isolated incident: In the United States, similar issues plagued the International Baccalaureate exam, which used an opaque artificial intelligence system to set students' scores, prompting protests from thousands of students and parents.

These episodes highlight some of the pitfalls of algorithmic decision-making. As technology advances, companies, governments, and other organizations are increasingly relying on algorithms to predict important social outcomes, using them to allocate jobs, forecast crime, and even try to prevent child abuse. These technologies promise to increase efficiency, enable more targeted policy interventions, and eliminate human imperfections from decision-making processes. But critics worry that opaque machine learning systems will in fact reflect and further perpetuate shortcomings in how organizations typically function, including by entrenching the racial, class, and gender biases of the societies that develop these systems. When courts and parole boards have used algorithms to forecast criminal behavior, for example, they have inaccurately identified Black defendants as future criminals more often than their white counterparts. Predictive policing systems, meanwhile, have led the police to unfairly target neighborhoods with a high proportion of non-white people, regardless of the true crime rate in those areas. Companies that have used recruitment algorithms have found that they amplify bias against women.

But there is an even more basic concern about algorithmic decision-making. Even in the absence of systematic class or racial bias, what if algorithms struggle to make even remotely accurate predictions about the trajectories of individuals' lives? That concern gains new support in a recent paper published in the Proceedings of the National Academy of Sciences. The paper describes a challenge, organized by a group of sociologists at Princeton University, involving 160 research teams from universities across the country and hundreds of researchers in total, including one of the authors of this article. These teams were tasked with analyzing data from the Fragile Families and Child Wellbeing Study, an ongoing study that measures various life outcomes for thousands of families who gave birth to children in large American cities around 2000. It is one of the richest data sets available to researchers: It tracks thousands of families over time, and has been used in more than 750 scientific papers.

The task for the teams was simple. They were given access to almost all of this data and asked to predict several important life outcomes for a sample of families. Those outcomes included the child's grade point average, their grit (a commonly used measure of passion and perseverance), whether the household would be evicted, the material hardship of the household, and whether the parent would lose their job.

The teams could draw on almost 13,000 predictor variables for each family, covering areas such as education, employment, income, family relationships, environmental factors, and child health and development. The researchers were also given access to the outcomes for half of the sample, and they could use this data to hone advanced machine-learning algorithms to predict each of the outcomes for the other half of the sample, which the organizers withheld. At the end of the challenge, the organizers scored the 160 submissions based on how well the algorithms predicted what actually happened in these people's lives.
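In code, the evaluation protocol amounts to something like the sketch below, which uses synthetic stand-in data rather than the actual Fragile Families data; the model choice and data sizes are arbitrary.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.normal(size=(4000, 100))      # stand-in for the thousands of predictor variables
y = rng.normal(size=4000)             # stand-in for one life outcome, e.g. GPA

# Half of the outcomes are released for training; the other half are withheld for scoring
X_train, y_train = X[:2000], y[:2000]
X_holdout, y_holdout = X[2000:], y[2000:]

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
print("Held-out R^2:", r2_score(y_holdout, model.predict(X_holdout)))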

The results were disappointing. Even the best-performing prediction models were only marginally better than random guesses. The models were rarely able to predict a student's GPA, for example, and they were even worse at predicting whether a family would get evicted, experience unemployment, or face material hardship. And the models gave almost no insight into how resilient a child would become.

In other words, even having access to incredibly detailed data and modern machine learning methods designed for prediction did not enable the researchers to make accurate forecasts. The results of the Fragile Families Challenge, the authors conclude, with notable understatement, "raise questions about the absolute level of predictive performance that is possible for some life outcomes, even with a rich data set."

Of course, machine learning systems may be much more accurate in other domains; this paper studied the predictability of life outcomes in only one setting. But the failure to make accurate predictions cannot be blamed on the failings of any particular analyst or method. Hundreds of researchers attempted the challenge, using a wide range of statistical techniques, and they all failed.

These findings suggest that we should doubt that big data can ever perfectly predict human behavior and that policymakers working in criminal justice policy and child-protective services should be especially cautious. Even with detailed data and sophisticated prediction techniques, there may be fundamental limitations on researchers' ability to make accurate predictions. Human behavior is inherently unpredictable, social systems are complex, and the actions of individuals often defy expectations.

And yet, disappointing as this may be for technocrats and data scientists, it also suggests something reassuring about human potential. If life outcomes are not firmly pre-determined, if an algorithm, given a set of past data points, cannot predict a person's trajectory, then the algorithm's limitations ultimately reflect the richness of humanity's possibilities.

Bryan Schonfeld and Sam Winter-Levy are PhD candidates in politics at Princeton University.

Link:
Algorithms may never really figure us out thank goodness - The Boston Globe