The 4 biggest science breakthroughs that Gen Z could live to see – The Next Web

The only difference between science fiction and science is patience. Yesterday's mainframes are today's smartphones, and today's neural networks will be tomorrow's androids. But long before any technology becomes reality, someone has to dream it into existence.

The worlds of science and technology are constantly in flux. It's impossible to tell what the future will bring. However, we can make some educated guesses based on recent breakthroughs in the fields of nuclear physics, quantum computing, robotics, artificial intelligence, and Facebook's name change.

Let's set our time machines to January 28, 2100, to take an imaginary gander at the four most amazing science and technology breakthroughs the sort-of-far future has to offer.

First on the list: medical nanobots. This could very well be the most important technological breakthrough in human history.

The premise is simple: tiny machines that function at the cellular level capable of performing tissue repairs, destroying intruders, and delivering targeted nano-medications.

And this wouldn't necessarily mean filling your bloodstream with trillions of microscopic hunks of metal and silicon. There's plenty of reason to believe scientists could take today's biological robots and turn them into artificial intelligence agents capable of executing code functions inside our bodies.

Imagine an AI swarm controlled by a bespoke neural network attached to our brain-computer interfaces with the sole purpose of optimizing our biological functions.

We might not be able to solve immortality by 2100, but medical nanobots could go a long way towards bridging the gap.

Another technology that's sure to save innumerable human lives is fusion power. Luckily, we're on the verge of solving that one already (at least in a rudimentary, proof-of-concept kind of way). With any luck, by the time Gen Z's grandkids are old enough to drive, we'll have advanced the technology to the point of abundance.

And that's when we can finally start solving humanity's problems.

The big idea here is that we'll come close to perfecting fusion power in the future and, because of that, we'll be able to use quantum computers to optimize civilization.

Fusion could potentially be a limitless form of power, and it's theoretically feasible that we could eventually scale its energy-producing capabilities to such a degree that energy would be as ubiquitous for private and commercial use as air is.

Under such a paradigm, we can imagine a race to the top for scientific endeavor, the ultimate goal of which would be to produce a utopian society.

With near-infinite energy freely available, there would be little incentive to fight over resources and every incentive to optimize our existence.

And that's where quantum computers come in. If we can make classical algorithms learn to drive cars by building binary supercomputers, imagine what we could do with quantum supercomputing clusters harnessing the unbridled energy of entire stars.

We could assign algorithms to every living creature in the known universe and optimize for their existence. In essence, we could potentially solve the traveling salesman problem at the multiverse scale.

Admittedly, warp drives are a glamour technology. Technically speaking, with Mars so nearby, we don't really have to travel beyond our own solar system.

But it's well-documented that humanity has a need for speed. And if we ever have any intention of seeing stars other than Sol up close, we're going to need spaceships that can travel really, really fast.

The big problem here is that the universe doesn't appear to allow anything to travel faster than light. And that's pretty slow. Even at light speed, it would take us over four years to travel to the closest star to Earth. In galactic terms, that's like spending 1/20th of your life walking to the neighbor's house.

Warp drives could solve this. Instead of going faster, we could theoretically exploit the wackiness of the universe to go further in a given amount of time without increasing speed.

This involves shifting through warp bubbles in space with exotic temporal properties, but in essence it's as simple as Einstein's observation that time works a bit differently at the edge of a black hole.

In the modern era, physicists are excited over some interesting equations and simulations that are starting to make the idea of warp drives seem less like science fiction and more like science.

An added benefit to the advent of the warp drive would be that it would exponentially increase the odds of humans discovering alien life.

If aliens aren't right next door, then maybe they're a few blocks over. If we can start firing probes beyond non-warp ranges by 2100, who knows what our long-range sensors will be able to detect?

Don't laugh. It's understandable if you don't think the metaverse belongs on this list. After all, it's just a bunch of cartoon avatars and bad graphics that you need a VR headset for, right?

But the metaverse of 2100 will be something different entirely. In 2022, Spotify tries to figure out what song you want to hear based on the music you've listened to in the past. In 2100, your brain-embedded AI assistant will know what song you want to hear because it has a direct connection to the area of your mind that processes sound, memory, and emotion.

The ideal metaverse would be a bespoke environment that's only distinguishable from reality in its utopianism. In other words, you'll only know it's fake because you can control the metaverse.

While it's obvious that jacking into the Matrix could pose a multitude of risks, the ability to take a vacation from reality could have positive implications, ranging from treating depression to giving people with extremely low quality of life a reason to want to continue living.

The ultimate freedom is choosing your own reality. And it's a safe bet that whoever owns the server it runs on is who's going to be in charge of the future.

Here is the original post:
The 4 biggest science breakthroughs that Gen Z could live to see - The Next Web

Posted in Uncategorized

The Eclipse Oniro Project aims to deliver consumer & IoT software that works across multiple platforms – CNX Software

Several of the embedded talks at FOSDEM 2022 mention the Eclipse Oniro Project. I had never heard about that project from the Eclipse Foundation, so let's see how they describe it:

Oniro is an Eclipse Foundation project focused on the development of a distributed open source operating system for consumer devices, regardless of the brand, model, make.

Oniro is a compatible implementation for the global market of OpenHarmony, an open source operating system specified and hosted by the OpenAtom Foundation.

Designed with modularity in mind, Oniro offers greater levels of flexibility and application portability across the broad spectrum of consumer and IoT devices from tiny embedded sensors and actuators, to feature rich smart appliances and mobile companions.

As a distributed and reusable collection of open source building blocks, Oniro enables compatibility with other open source technologies and ecosystems. Through close collaboration with projects and foundations such as OpenHarmony from the OpenAtom Foundation, Yocto project and OpenChain from the Linux Foundation, Oniro helps build bridges rather than creating a digital divide.

If OpenHarmony rings a bell, it's because it's the open-source version of Huawei's HarmonyOS operating system, now managed by the OpenAtom Foundation. The description confuses me even more, and I'm still not sure what it is for, but member companies and organizations include Linaro, the embedded systems company SECO, lesser-known companies like Synesthesia, and of course Huawei. So let's check the project's resources to see if we can find more details.

First, it's fairly new: the working group was only established on October 26, 2021, after a year of work from members. We've got more clarity with regard to the goals as well:

The mission of the Eclipse Oniro Top-Level Project is the design, development, production and maintenance of an open source software platform, having an operating system, an ADK/SDK, standard APIs and basic applications, like UI, as core elements, targeting different industries thanks to a next generation multi-kernel architecture, that simplifies the existing landscape of complex systems, and its deployment across a wide range of devices.

So basically, I understand that Oniro aims to provide a vendor-agnostic platform for developing software that runs on various operating systems and hardware, in order to reduce fragmentation in the consumer and IoT device industry. I will not insert an xkcd meme here, but you know what I mean. Right now, Oniro relies on the Poky/Yocto Project build system and supports three operating systems, Linux, Zephyr OS, and FreeRTOS, allowing it to target both application processors and microcontrollers.

The documentation lists seven hardware platforms supported by the Oniro project:

The Eclipse Oniro Project also integrates its various components into a representative use-case called a Blueprint, and at the time of writing there are five Blueprints:

The FOSDEM 2022 talk "GPIO across Linux and Zephyr kernels" by Bernhard Rosenkränzer will showcase the Door Lock Blueprint and show how it's possible to share code between a system running Zephyr on a Cortex-M and another running Linux on a Cortex-A. This code reuse should be beneficial, since a single piece of code can be fully tested and work across multiple platforms and operating systems, instead of having two separate trees where, for instance, a bug may be fixed in one tree but not the other.

Jean-Luc started CNX Software in 2010 as a part-time endeavor, before quitting his job as a software engineering manager and starting to write daily news and reviews full time later in 2011.

Visit link:
The Eclipse Oniro Project aims to deliver consumer & IoT software that works across multiple platforms - CNX Software

Posted in Uncategorized

Five risks of moving your database to the cloud – MIT Technology Review

Moving to the cloud is all the rage. According to an IDC Survey Spotlight, "Experience in Migrating Databases to the Cloud," 63% of enterprises are actively migrating their databases to the cloud, and another 29% are considering doing so within the next three years.

This article discusses some of the risks customers may unwittingly encounter when moving their database to a database as a service (DBaaS) in the cloud, especially when the DBaaS leverages open source database software such as Apache Cassandra, MariaDB, MySQL, Postgres, or Redis. At EDB, we classify these risks into five categories: support, service, technology stagnation, cost, and lock-in. Moving to the cloud without sufficient diligence and risk mitigation can lead to significant cost overruns and project delays, and more importantly, may mean that enterprises do not get the expected business benefits from cloud migration.

Because EDB focuses on the Postgres database, I will draw the specifics from our experiences with Postgres services, but the conclusions are equally valid for other open source database services.

Support risk. Customers running software for production applications need support, whether they run in the cloud or on premises. Support for enterprise-level software must cover two aspects: expert advice on how to use the product correctly, especially in challenging circumstances, and quickly addressing bugs and defects that impact production or the move to production.

For commercial software, a minimal level of support is bundled with the license. Open source databases don't come with such a license. This opens the door for a cloud database provider to create and operate a database service without investing sufficiently in the open source community to address bugs and provide support.

Customers can evaluate a cloud database provider's ability to support their cloud migration by checking the open source software release notes and identifying team members who actively participate in the project. For example, for Postgres, the release notes are freely available, and they name every individual who has contributed new features or bug fixes. Other open source communities follow similar practices.

Open source cloud database providers that are not actively involved in the development and bug-fixing process cannot provide both aspects of support (advice and rapid response to problems), which presents a significant risk to cloud migration.

Service risk. Databases are complex software products. Many users need expert advice and hands-on assistance to configure databases correctly to achieve optimal performance and high availability, especially when moving from familiar on-premises deployments to the cloud. Cloud database providers that do not offer consultative and expert professional services to facilitate this move introduce risk into the process. Such providers ask the customer to assume the responsibilities of a general contractor and to coordinate between the DBaaS provider and potential professional services providers. Instead of a single entity they can consult to help them achieve a seamless deployment with the required performance and availability levels, they get caught in the middle, having to coordinate and mitigate issues between vendors.

Customers can reduce this risk by making sure they clearly understand who is responsible for the overall success of their deployment, and that this entity is indeed in a position to execute the entire project successfully.

Technology stagnation risk. The shared responsibility model is a key component of a DBaaS. While the user handles schema definition and query tuning, the cloud database provider applies minor version updates and major version upgrades. Not all providers are committed to upgrading in a timely manner, and some can lag significantly. At the time of this writing, one of the major Postgres DBaaS providers lags the open source community by almost three years in its deployment of Postgres versions. While DBaaS providers can selectively backport security fixes, a delayed application of new releases can put customers in a situation where they miss out on new database capabilities, sometimes for years. Customers need to inspect a provider's historical track record of applying upgrades to assess this exposure.

A similar risk is introduced when a proprietary cloud database provider tries to create its own fork or version of well-known open source software. Sometimes this is done to optimize the software for the cloud environment or to address license restrictions. Forked versions can deviate significantly from the better-known parent or fall behind the open source version. Well-known examples of such forks or proprietary versions are Aurora Postgres (a Postgres derivative), Amazon DocumentDB (with MongoDB compatibility), and Amazon OpenSearch Service (originally derived from Elasticsearch).

Users need to be careful when adopting cloud-specific versions or forks of open source software. Capabilities can deviate over time, and the cloud database provider may or may not adopt the new capabilities of the open source version.

Cost risk. Leading cloud database services have not experienced meaningful direct price increases. However, there is a growing understanding that the nature of cloud services can drive significant cost risk, especially when self-service and rapid elasticity are combined with an opaque cost model. In on-premises environments, database administrators (DBAs) and developers must optimize code to achieve performance with the available hardware. In the cloud, it can be much more expedient to ask the cloud provider to increase provisioned input/output operations per second (IOPS), compute, or memory to optimize performance. As each such increase drives up cost, a short-term fix is likely to have long-lasting negative cost impacts.

Users can mitigate the cost risk in two ways: (1) close supervision of increases in IOPS, CPU, and memory, to make sure they are balanced against the cost of application optimization; and (2) scrutiny of the cost models of DBaaS providers, to identify and avoid vendors with complex and unpredictable cost models.

Lock-in risk. Cloud database services can create a "Hotel California" effect, where data cannot easily leave the cloud again, in several ways. While data egress cost is often mentioned, general data gravity and the integration with other cloud-specific tools for data management and analysis are more impactful. Data gravity is a complex concept that, at a high level, purports that once a business data set is available on a cloud platform, more applications will likely be deployed using the data on that platform, which in turn makes it less likely that the data can be moved elsewhere without significant business impact.

Cloud-specific tools are also a meaningful driver for lock-in. All cloud platforms provide convenient and proprietary data management and analysis tools. While they help derive business value quickly, they also create lock-in.

Users can mitigate the cloud lock-in effect by carefully avoiding the use of proprietary cloud tools and by making sure they only use DBaaS solutions that support efficient data replication to other clouds.

Planning for risk. Moving databases to the cloud is undoubtedly a target for many organizations, but doing so is not risk-free. Businesses need to fully investigate and understand the potential weaknesses of cloud database providers in the areas of support, service, technology stagnation, cost, and lock-in. While these risks are not a reason to shy away from the cloud, it's important to address them up front, and to understand and mitigate them as part of a carefully considered cloud migration strategy.

This content was produced by EDB. It was not written by MIT Technology Review's editorial staff.

See the original post here:
Five risks of moving your database to the cloud - MIT Technology Review

Posted in Uncategorized

Los Angeles Is Making Good on Its Promise to Ban Oil and Gas Wells – Gizmodo

Image: Santi Visalli (Getty Images)

Los Angeles is finally turning the page on the city's oil-coated past. In unanimously approved measures, the Los Angeles City Council voted in favor of banning new oil and gas wells and outlined a plan to phase out existing wells over five years.

The vote marks a huge victory after activists waged a years-long effort to get the city to clean up its act. It will provide a huge benefit to communities of color that often live in the shadow of the city's most polluting sites. The approved measures will see the city draft ordinances to prohibit new oil and gas extraction, hire experts to analyze how to phase out the remaining wells throughout the city, and create a framework for plugging wells left abandoned.

"This is a momentous step forward for Los Angeles, and a clear message we are sending to Big Oil," Councilman Mitch O'Farrell, who is also the chair of the City Council's Energy, Climate Change, Environmental Justice, and River committee, said in a statement. "These actions are critical to our ambitious 'LA100' efforts, which will achieve 100% carbon-free energy in Los Angeles by 2035."

The decision came as welcome news to Los Angeles activists who have raised the alarm about oil wells hidden throughout the city for years. The wells are a public health ill. People and families living near extraction sites are regularly exposed to air pollution that is correlated with everything from headaches and skin conditions to spontaneous preterm births and respiratory illness.

"Starting today, I have a little bit more hope for our communities," Ashley Hernandez, an organizer with Communities for a Better Environment, told the Associated Press. "Our futures will hopefully not be full of emergency room visits, bloody noses, or burdensome health impacts, but a cleaner future where black and brown families are the ones protected and valued."

Big Oil is, perhaps not surprisingly, big mad about the new laws. The California Independent Petroleum Association, a group representing more than 400 oil and gas entities, sent a letter to NBC Los Angeles in which it claimed the new efforts would "devastate the viability of the city of Los Angeles" and eliminate thousands of jobs. This is despite the fact that the approved ordinance will develop a jobs program to help transition some of these workers to other industries. Plugging abandoned wells could also be a prime source of employment for oil and gas workers as the world winds the industry down, while also generating billions in public health and climate benefits.

In an interview with the AP, though, CIPA CEO Rock Zierman went as far as to call the city's new measures illegal.

"Taking someone's property without compensation, particularly one which is duly permitted and highly regulated, is illegal and violates the U.S. Constitution's 5th Amendment against illegal search and seizure," Zierman said.

Los Angeles has a long history of oil production. That legacy lives on in the well sites scattered across even busy parts of the city, including some hidden behind building facades. As of 2021, the city was still home to around 1,000 wells. With the new ordinance, those wells will be relegated to history and residents will be able to breathe a little easier.

View post:

Los Angeles Is Making Good on Its Promise to Ban Oil and Gas Wells - Gizmodo

Posted in Uncategorized

New Justice Is Unlikely to Thwart Supreme Courts Rightward Lurch – The New York Times

WASHINGTON – Justice Stephen G. Breyer's successor at the Supreme Court may turn out to possess a blazing intellect, infectious charm and fresh liberal perspectives. But there is no reason to think the new justice will be able to slow the court's accelerating drive to the right.

Indeed, the court's trajectory may have figured in Justice Breyer's retirement calculations, said Kate Shaw, a professor at the Benjamin N. Cardozo School of Law. There's a good chance, she said, that the dynamics on the current court, both the speed and magnitude of the change that's coming, had some impact on Breyer's decision to go now.

He may have figured, she suggested, that someone else might as well try to stand in the way of a juggernaut committed to fulfilling, and fast, the conservative legal movement's wish list in cases on abortion, guns, race, religion and voting.

In a letter to President Biden on Thursday, Justice Breyer, 83, said he would step down at the end of the Supreme Court's current term, in June or July, if his successor has been confirmed by then. But that liberal-for-liberal swap will do nothing to alter the power and ambitions of the court's six-member conservative supermajority.

Its members, all appointed by Republican presidents, seem largely unconcerned about a sharp dip in the court's public approval, caustic criticism from the liberal justices or the possibility that Congress could add seats or otherwise alter the court's structure. Facing no perceived headwinds, the conservative majority seems ready to go for broke.

"This is a court in a hurry," said Stephen I. Vladeck, a law professor at the University of Texas at Austin.

The shape and speed of the court's conservative agenda have come into focus in the last six months.

Most notably, the court repeatedly refused to block a Texas law that bans most abortions after six weeks. The law is flatly at odds with Roe v. Wade, the 1973 decision that established a constitutional right to abortion and prohibited states from banning the procedure until fetal viability, around 23 weeks.

The court also repeatedly thwarted initiatives by the Biden administration to address the coronavirus pandemic, blocking an eviction moratorium and a vaccine-or-testing mandate for large employers. And it refused to block a lower-court ruling requiring the administration to reinstate a Trump-era immigration program that forces asylum seekers arriving at the southwestern border to await approval in Mexico.

Although there was no split in the lower courts, the usual key criterion for Supreme Court review, the justices agreed to decide whether to overrule Roe entirely in a case from Mississippi and whether to do away with affirmative action in higher education in cases concerning Harvard and the University of North Carolina. In that last case, the appeals court had not even ruled yet.

It is no surprise that conservative justices vote for conservative outcomes. But the pace of change, often accompanied by procedural shortcuts, is harder to explain.

The three newest justices, all appointed by President Donald J. Trump, are 50 to 56 years old. If they serve as long as Justice Breyer, they will be on the court for another quarter-century or so. They have plenty of time.

Nor do there seem to be looming departures among the other conservatives. The oldest, Justice Clarence Thomas, is 73, a decade younger than Justice Breyer, and lately he has been particularly engaged in the court's work.

He has been an active participant in oral arguments, for instance, a change from earlier in his tenure, when he once went for a decade without asking a question from the bench.

The six-justice conservative majority seems built to last.

Still, two of the last four vacancies at the court were created by deaths: those of Justice Antonin Scalia in 2016 and Justice Ruth Bader Ginsburg in 2020.

"Maybe there's some sense that these majorities can be fleeting," Professor Shaw said, "so you do as much as you can as quickly as you can because who knows what the future holds."

When the case on overruling Roe was argued in December, the court's three liberal members sounded dismayed, if not distraught, at the prospect of such a stark shift so soon after a change in the court's membership. Justice Ginsburg, a liberal icon, was replaced by a conservative, Justice Amy Coney Barrett, Mr. Trump's third appointee to the court.

"Will this institution survive the stench that this creates in the public perception that the Constitution and its reading are just political acts?" Justice Sonia Sotomayor asked.

The court's conservative wing seemed unmoved. Indeed, its five most conservative members seemed to have little interest in a more incremental position sketched out by Chief Justice John G. Roberts Jr., who suggested that the court could uphold the Mississippi law at issue, which bans most abortions after 15 weeks, and leave it at that for now.

Professor Shaw said some members of the court may have been emboldened by the lack of a sustained national outcry over the Texas abortion law.

"They've dipped their toe in the water of essentially ending Roe in the second-most populous state in the nation," she said. "They may well have drawn the conclusion that any backlash to the overruling of Roe would be somewhat muted or short-lived and would not create an existential threat to the court."

The inconclusive report issued by Mr. Biden's commission on potential changes to the court's processes and structure may also have given the court's conservative majority confidence that it had nothing to fear from the other branches.

"When the Biden commission came back with its report, that just further took the brakes off," Professor Vladeck said. "This is not 1937."

That was the year that President Franklin D. Roosevelt introduced what came to be known as his court-packing plan. It failed in the immediate sense: The number of justices stayed steady at nine. But it seemed to exert pressure on the court, which began to uphold progressive New Deal legislation.

There appears to be no comparable pressure now, Professor Vladeck said. "This is a court that is not shy, that is not afraid of its shadow and is not remotely worried about doing anything to provoke Congress," he said.

The court has lately been creative in using unusual procedures to generate fast results.

In recent years, for instance, it has done some of its most important work on what critics call its "shadow docket," in which the court decides emergency applications on a very quick schedule, without full briefing and oral argument, often in a terse ruling issued late at night.

Probably in response to criticism of that practice, the court has this term started to hear arguments in important cases that had arrived at the court as emergency applications, including ones on the death penalty, the Texas abortion law and two Biden administration programs requiring or encouraging vaccination against the coronavirus.

The court has also begun to use another procedural device to allow it to rule quickly, agreeing to hear cases before appeals courts have even issued decisions.

The procedure, certiorari before judgment, used to be exceedingly rare, seemingly reserved for national crises like President Richard M. Nixon's refusal to turn over tape recordings to a special prosecutor or President Harry S. Truman's seizure of the steel industry.

Until early 2019, the court had not used the procedure in 14 years. Since then, Professor Vladeck found, it has used it 14 times.

"This is a court that is not afraid of dusting off obscure procedures and disfavored paths to review," he said, if that allows it to reach decisions more quickly.

"It all ends up in the same place," he said, "which is increasing the power of the court."

Visit link:

New Justice Is Unlikely to Thwart Supreme Courts Rightward Lurch - The New York Times

Posted in Uncategorized

'We are not going to change it': Shadow Minister for Financial Services on broker remuneration – The Adviser

Annie Kane | 08:57 AM, 28 Jan 2022 | 7 minute read

The existing arrangements for broker remuneration would remain under a Labor government, Stephen Jones MP, the shadow minister for financial services and superannuation, has confirmed.

Speaking at an event hosted by PritchittBland Communications in Sydney on Thursday (27 January), Mr Jones, who is also the shadow assistant treasurer, confirmed that the Australian Labor Party has no intention of changing the current broker remuneration structure, and said that commissioner Kenneth Hayne "probably wasn't right" in his recommendation to move to a consumer-pays model.

The final report of the Royal Commission into Misconduct in the Banking, Superannuation and Financial Services Industry recommended that changes to brokers' remuneration be made over a period of two or three years, by first prohibiting lenders from paying trail commission to mortgage brokers in respect of new loans, then prohibiting lenders from paying other commissions to mortgage brokers.

In its official response to the royal commission in 2019, the Labor Party had originally proposed banning trail commissions paid to mortgage brokers and capping upfront commissions at 1.1 per cent.

However, the shadow assistant treasurer and member for Whitlam has confirmed that the party no longer supports changing broker remuneration.

When asked by The Adviser how the ALP would treat broker remuneration should it win the upcoming federal election, Mr Jones responded: "We think that's a settled issue; the existing arrangements are a settled issue."

"We think the existing arrangement should stand. We're not going to change it..."

"We think Hayne probably wasn't right on that. He might have had it theoretically right, but when you look at the operation of the industry, and how it works in Australia, and the fact that well over 60 per cent of residential mortgages are now being written through brokers, you'd be doing enormous damage to the settled state of affairs to adopt the Hayne royal commission [recommendation on broker remuneration]."

The comments from the shadow minister for financial services and superannuation are pertinent given the upcoming federal election, which is expected to be held in May.

Neither Labor nor the current Coalition government has voiced intentions to radically change broker remuneration in recent months.

While the Morrison government had initially said in its official response to the final report in 2019 that it would ban trail commission payments for new mortgages from 1 July 2020, the Treasurer later revealed that the role of upfront and trail commissions would instead be reviewed in 2022.

Treasurer Josh Frydenberg recently suggested to Momentum Media that such a review would be held in the back half of the year.

Members of industry, including associations, aggregators and individual brokers, have been busy engaging with politicians and outlining the impacts that changing broker remuneration would have on the industry.

In a recent video update to members, the chief executive of the Mortgage & Finance Association of Australia (MFAA), Mike Felton, said that the association had undertaken a significant amount of work last year in preparation for the upcoming 2022 review, adding that brokers were in an exceptionally strong position to face it.

Appetite to move to a consumer-pays model is also low among existing broker clients, with research from the Finance Brokers Association of Australia (FBAA) recently finding that the vast majority of broker clients are not concerned with brokers receiving commissions, and less than a third would pay a fee for service.

The survey echoes similar findings made by Momentum Intelligence in 2019, when the inaugural Consumer Access to Mortgages report found that a majority of borrowers (58 per cent) said they would not be willing to pay a broker a fee, while a whopping 96.5 per cent of broker clients said they wouldn't be willing to pay $2,000 for the service.

Annie Kane is the editor of The Adviser and Mortgage Business.

As well as writing about the Australian broking industry, the mortgage market, financial regulation, fintechs and the wider lending landscape, Annie is also the host of the Elite Broker and In Focus podcasts and The Adviser Live webcasts.

Continued here:

'We are not going to change it': Shadow Minister for Financial Services on broker remuneration - The Adviser

Posted in Uncategorized

What is Machine Learning? | IBM

This introduction to machine learning provides an overview of its history, important definitions, applications and concerns within businesses today.

Machine learning is a branch of artificial intelligence (AI) and computer science which focuses on the use of data and algorithms to imitate the way that humans learn, gradually improving its accuracy.

IBM has a rich history with machine learning. One of its own, Arthur Samuel, is credited with coining the term "machine learning" through his research (PDF, 481 KB) (link resides outside IBM) around the game of checkers. Robert Nealey, the self-proclaimed checkers master, played the game on an IBM 7094 computer in 1962, and he lost to the computer. Compared to what can be done today, this feat seems almost trivial, but it's considered a major milestone within the field of artificial intelligence. Over the following decades, technological developments around storage and processing power would enable some of the innovative products we know and love today, such as Netflix's recommendation engine or self-driving cars.

Machine learning is an important component of the growing field of data science. Through the use of statistical methods, algorithms are trained to make classifications or predictions, uncovering key insights within data mining projects. These insights subsequently drive decision making within applications and businesses, ideally impacting key growth metrics. As big data continues to expand and grow, the market demand for data scientists will increase, requiring them to assist in the identification of the most relevant business questions and subsequently the data to answer them.

Since deep learning and machine learning tend to be used interchangeably, it's worth noting the nuances between the two. Machine learning, deep learning, and neural networks are all sub-fields of artificial intelligence. However, deep learning is actually a sub-field of machine learning, and neural networks are a sub-field of deep learning.

The way in which deep learning and machine learning differ is in how each algorithm learns. Deep learning automates much of the feature extraction piece of the process, eliminating some of the manual human intervention required and enabling the use of larger data sets. You can think of deep learning as "scalable machine learning" as Lex Fridman notes in this MIT lecture (01:08:05) (link resides outside IBM). Classical, or "non-deep", machine learning is more dependent on human intervention to learn. Human experts determine the set of features to understand the differences between data inputs, usually requiring more structured data to learn.

"Deep" machine learning can leverage labeled datasets, also known as supervised learning, to inform its algorithm, but it doesnt necessarily require a labeled dataset. It can ingest unstructured data in its raw form (e.g. text, images), and it can automatically determine the set of features which distinguish different categories of data from one another. Unlike machine learning, it doesn't require human intervention to process data, allowing us to scale machine learning in more interesting ways. Deep learning and neural networks are primarily credited with accelerating progress in areas, such as computer vision, natural language processing, and speech recognition.

Neural networks, or artificial neural networks (ANNs), are composed of node layers, containing an input layer, one or more hidden layers, and an output layer. Each node, or artificial neuron, connects to others and has an associated weight and threshold. If the output of any individual node is above the specified threshold value, that node is activated, sending data to the next layer of the network. Otherwise, no data is passed along to the next layer of the network. The "deep" in deep learning just refers to the depth of layers in a neural network. A neural network that consists of more than three layers (which would be inclusive of the input and the output) can be considered a deep learning algorithm or a deep neural network. A neural network with only two or three layers is just a basic neural network.
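As a concrete illustration of the node behavior just described, here is a minimal sketch in plain Python/NumPy; the weights, bias, and threshold values are made up for the example.

```python
import numpy as np

# One artificial neuron: a weighted sum of inputs plus a bias, compared
# against a threshold. The node "fires" (passes its value on) only when
# the sum clears the threshold. All values here are illustrative.

def node_output(inputs, weights, bias, threshold=0.0):
    weighted_sum = np.dot(inputs, weights) + bias
    return weighted_sum if weighted_sum > threshold else 0.0  # inactive node passes nothing on

inputs = np.array([0.5, 0.9, -0.3])   # outputs arriving from the previous layer
weights = np.array([0.8, -0.2, 0.4])  # one weight per incoming connection
print(node_output(inputs, weights, bias=0.1))  # ~0.2 > threshold, so the node fires
```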

See the blog post "AI vs. Machine Learning vs. Deep Learning vs. Neural Networks: What's the Difference?" for a closer look at how the different concepts relate.

UC Berkeley (link resides outside IBM) breaks out the learning system of a machine learning algorithm into three main parts: a decision process, an error function, and a model optimization process.

Machine learning classifiers fall into three primary categories.

Supervised learning, also known as supervised machine learning, is defined by its use of labeled datasets to train algorithms to classify data or predict outcomes accurately. As input data is fed into the model, it adjusts its weights until the model has been fitted appropriately. This occurs as part of the cross-validation process to ensure that the model avoids overfitting or underfitting. Supervised learning helps organizations solve a variety of real-world problems at scale, such as classifying spam into a separate folder from your inbox. Some methods used in supervised learning include neural networks, naïve Bayes, linear regression, logistic regression, random forest, support vector machines (SVM), and more.
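The following minimal sketch shows this workflow, fitting a logistic regression classifier (one of the methods listed above) on a labeled dataset and checking it on held-out data. The library choice, scikit-learn, and the synthetic data are assumptions made for illustration; the article itself names no library.

```python
# Minimal supervised-learning sketch: fit on labeled examples, evaluate on
# unseen ones. scikit-learn and the toy dataset are assumed choices.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=200, n_features=5, random_state=0)  # toy labeled data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)                      # weights adjust until the model fits the labels
print("accuracy:", model.score(X_test, y_test))  # evaluated on examples the model never saw
```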

Unsupervised learning, also known as unsupervised machine learning, uses machine learning algorithms to analyze and cluster unlabeled datasets. These algorithms discover hidden patterns or data groupings without the need for human intervention. Its ability to discover similarities and differences in information makes it the ideal solution for exploratory data analysis, cross-selling strategies, customer segmentation, and image and pattern recognition. It's also used to reduce the number of features in a model through the process of dimensionality reduction; principal component analysis (PCA) and singular value decomposition (SVD) are two common approaches for this. Other algorithms used in unsupervised learning include neural networks, k-means clustering, probabilistic clustering methods, and more.
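Here is a similarly hedged sketch of unsupervised learning: k-means clustering to discover groupings and PCA for dimensionality reduction, both mentioned above. Again, scikit-learn and the synthetic data are illustrative assumptions rather than anything the article prescribes.

```python
# Minimal unsupervised-learning sketch: no labels are ever given to the model.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.decomposition import PCA

X, _ = make_blobs(n_samples=300, centers=3, n_features=6, random_state=0)  # labels discarded

clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)  # hidden groupings
X_2d = PCA(n_components=2).fit_transform(X)                                # 6 features -> 2

print(clusters[:10])  # cluster assignments found without any labels
print(X_2d.shape)     # (300, 2)
```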

Semi-supervised learning offers a happy medium between supervised and unsupervised learning. During training, it uses a smaller labeled data set to guide classification and feature extraction from a larger, unlabeled data set. Semi-supervised learning can solve the problem of not having enough labeled data (or not being able to afford to label enough data) to train a supervised learning algorithm.
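One way to make this concrete is self-training, sketched below with scikit-learn's SelfTrainingClassifier; the article does not prescribe an algorithm, so treat this as one illustrative approach. Unlabeled samples are marked with -1, which is the library's convention.

```python
# Minimal semi-supervised sketch: a small labeled set guides learning over a
# larger unlabeled pool. scikit-learn and the toy data are assumed choices.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

X, y = make_classification(n_samples=500, random_state=0)
y_partial = y.copy()
y_partial[50:] = -1  # pretend only the first 50 samples are labeled

model = SelfTrainingClassifier(LogisticRegression(max_iter=1000))
model.fit(X, y_partial)   # iteratively pseudo-labels the unlabeled pool
print(model.score(X, y))  # compared against the labels we held back
```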

For a deep dive into the differences between these approaches, check out "Supervised vs. Unsupervised Learning: What's the Difference?"

Reinforcement machine learning is a behavioral machine learning model that is similar to supervised learning, but the algorithm isn't trained using sample data. This model learns as it goes by using trial and error. A sequence of successful outcomes is reinforced to develop the best recommendation or policy for a given problem.
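The trial-and-error loop can be sketched with tabular Q-learning on a toy five-cell corridor; this is purely illustrative (it is not how any production system, Watson included, was built), and every constant in it is made up.

```python
import random

# Toy Q-learning: the agent starts at cell 0 and is rewarded only for
# reaching the rightmost cell. Repeated trial and error reinforces the
# action values (Q) that lead to success.
n_states, actions = 5, [-1, +1]          # move left or right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.2    # learning rate, discount, exploration

for _ in range(500):                      # episodes of trial and error
    s = 0
    while s != n_states - 1:
        if random.random() < epsilon:     # sometimes explore...
            a = random.choice(actions)
        else:                             # ...otherwise exploit what was learned
            a = max(actions, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), n_states - 1)
        r = 1.0 if s2 == n_states - 1 else 0.0   # reward reinforces success
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in actions) - Q[(s, a)])
        s = s2

# The learned policy: move right (+1) from every non-terminal cell.
print([max(actions, key=lambda act: Q[(s, act)]) for s in range(n_states - 1)])
```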

The IBM Watson system that won the Jeopardy! challenge in 2011 is a good example. The system used reinforcement learning to decide whether to attempt an answer (or question, as it were), which square to select on the board, and how much to wager, especially on Daily Doubles.

Learn more about reinforcement learning.

Here are just a few examples of machine learning you might encounter every day:

Speech recognition: also known as automatic speech recognition (ASR), computer speech recognition, or speech-to-text, this is a capability which uses natural language processing (NLP) to process human speech into a written format. Many mobile devices incorporate speech recognition into their systems to conduct voice search (e.g., Siri) or to provide more accessibility around texting.

Customer service: Online chatbots are replacing human agents along the customer journey. They answer frequently asked questions (FAQs) around topics, like shipping, or provide personalized advice, cross-selling products or suggesting sizes for users, changing the way we think about customer engagement across websites and social media platforms. Examples include messaging bots on e-commerce sites with virtual agents, messaging apps, such as Slack and Facebook Messenger, and tasks usually done by virtual assistants and voice assistants.

Computer vision: This AI technology enables computers and systems to derive meaningful information from digital images, videos and other visual inputs, and based on those inputs, it can take action. This ability to provide recommendations distinguishes it from image recognition tasks. Powered by convolutional neural networks, computer vision has applications within photo tagging in social media, radiology imaging in healthcare, and self-driving cars within the automotive industry.

Recommendation engines: Using past consumption behavior data, AI algorithms can help to discover data trends that can be used to develop more effective cross-selling strategies. This is used to make relevant add-on recommendations to customers during the checkout process for online retailers.

Automated stock trading: Designed to optimize stock portfolios, AI-driven high-frequency trading platforms make thousands or even millions of trades per day without human intervention.

As machine learning technology advances, it has certainly made our lives easier. However, implementing machine learning within businesses has also raised a number of ethical concerns surrounding AI technologies. Some of these include:

While this topic garners a lot of public attention, many researchers are not concerned with the idea of AI surpassing human intelligence in the near or immediate future. This is also referred to as superintelligence, which Nick Bostrom defines as "any intellect that vastly outperforms the best human brains in practically every field, including scientific creativity, general wisdom, and social skills." Despite the fact that strong AI and superintelligence are not imminent in society, the idea raises some interesting questions as we consider the use of autonomous systems, like self-driving cars. It's unrealistic to think that a driverless car would never get into an accident, but who is responsible and liable in those circumstances? Should we still pursue autonomous vehicles, or do we limit the integration of this technology to create only semi-autonomous vehicles which promote safety among drivers? The jury is still out on this, but these are the types of ethical debates occurring as new, innovative AI technology develops.

While a lot of public perception around artificial intelligence centers on job loss, this concern should probably be reframed. With every disruptive new technology, we see the market demand for specific job roles shift. For example, when we look at the automotive industry, many manufacturers, like GM, are shifting to focus on electric vehicle production to align with green initiatives. The energy industry isn't going away, but the source of energy is shifting from fuel to electricity. Artificial intelligence should be viewed in a similar manner: it will shift the demand for jobs to other areas. There will need to be individuals to help manage these systems as data grows and changes every day. There will still need to be resources to address more complex problems within the industries most likely to be affected by job demand shifts, like customer service. The important aspect of artificial intelligence and its effect on the job market will be helping individuals transition to these new areas of market demand.

Privacy tends to be discussed in the context of data privacy, data protection and data security, and these concerns have allowed policymakers to make strides here in recent years. For example, in 2016, GDPR legislation was created to protect the personal data of people in the European Union and European Economic Area, giving individuals more control over their data. In the United States, individual states are developing policies, such as the California Consumer Privacy Act (CCPA), which require businesses to inform consumers about the collection of their data. This recent legislation has forced companies to rethink how they store and use personally identifiable information (PII). As a result, investments in security have become an increasing priority for businesses as they seek to eliminate any vulnerabilities and opportunities for surveillance, hacking, and cyberattacks.

Instances of bias and discrimination across a number of intelligent systems have raised many ethical questions regarding the use of artificial intelligence. How can we safeguard against bias and discrimination when the training data itself can lend itself to bias? While companies typically have well-meaning intentions around their automation efforts, Reuters (link resides outside IBM) highlights some of the unforeseen consequences of incorporating AI into hiring practices. In their effort to automate and simplify a process, Amazon unintentionally biased potential job candidates by gender for open technical roles, and ultimately had to scrap the project. As events like these surface, Harvard Business Review (link resides outside IBM) has raised other pointed questions around the use of AI within hiring practices, such as what data you should be able to use when evaluating a candidate for a role.

Bias and discrimination aren't limited to the human resources function either; they can be found in a number of applications, from facial recognition software to social media algorithms.

As businesses become more aware of the risks with AI, they've also become more active in the discussion around AI ethics and values. For example, IBM's CEO Arvind Krishna shared last year that IBM has sunset its general-purpose facial recognition and analysis products, emphasizing that "IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency."

To read more about this, check out IBM's policy blog, relaying its point of view on "A Precision Regulation Approach to Controlling Facial Recognition Technology Exports."

Since there isn't significant legislation to regulate AI practices, there is no real enforcement mechanism to ensure that ethical AI is practiced. The current incentives for companies to adhere to these guidelines are the negative repercussions of an unethical AI system on the bottom line. To fill the gap, ethical frameworks have emerged as part of a collaboration between ethicists and researchers to govern the construction and distribution of AI models within society. However, at the moment, these only serve as guides, and research (link resides outside IBM) (PDF, 1 MB) shows that the combination of distributed responsibility and lack of foresight into potential consequences isn't necessarily conducive to preventing harm to society.

To read more about IBM's position on AI ethics, see here.

IBM Watson Studio on IBM Cloud Pak for Data supports the end-to-end machine learning lifecycle on a data and AI platform. You can build, train and manage machine learning models wherever your data lives and deploy them anywhere in your hybrid multicloud environment.

To get started, sign up for an IBMid and create your IBM Cloud account.

Excerpt from:
What is Machine Learning? | IBM

Posted in Uncategorized

Machine Learning Tutorial | Machine Learning with Python …

Machine Learning tutorial provides basic and advanced concepts of machine learning. Our machine learning tutorial is designed for students and working professionals.

Machine learning is a growing technology which enables computers to learn automatically from past data. Machine learning uses various algorithms to build mathematical models and make predictions using historical data or information. Currently, it is being used for various tasks such as image recognition, speech recognition, email filtering, Facebook auto-tagging, recommender systems, and many more.

This machine learning tutorial gives you an introduction to machine learning along with the wide range of machine learning techniques such as Supervised, Unsupervised, and Reinforcement learning. You will learn about regression and classification models, clustering methods, hidden Markov models, and various sequential models.

In the real world, we are surrounded by humans who can learn everything from their experiences, and we have computers or machines which simply follow our instructions. But can a machine also learn from experience or past data the way a human does? This is where machine learning comes in.

Machine learning is a subset of artificial intelligence that is mainly concerned with the development of algorithms which allow a computer to learn from data and past experiences on its own. The term machine learning was first introduced by Arthur Samuel in 1959. We can define it in a summarized way as: machine learning enables a machine to automatically learn from data, improve performance with experience, and predict things without being explicitly programmed.

With the help of sample historical data, known as training data, machine learning algorithms build a mathematical model that helps make predictions or decisions without being explicitly programmed. Machine learning brings computer science and statistics together to create predictive models. Machine learning constructs or uses algorithms that learn from historical data. The more information we provide, the better the performance.

A machine has the ability to learn if it can improve its performance by gaining more data.

A machine learning system learns from historical data, builds prediction models, and, whenever it receives new data, predicts the output for it. The accuracy of the predicted output depends upon the amount of data, as a huge amount of data helps build a better model which predicts the output more accurately.

Suppose we have a complex problem where we need to perform some predictions. Instead of writing code for it directly, we just feed the data to generic algorithms, and with the help of these algorithms, the machine builds the logic from the data and predicts the output. Machine learning has changed our way of thinking about such problems. The working of a machine learning algorithm follows a simple pipeline: feed in past data, build a model, predict the output for new data, as in the sketch below.
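A minimal sketch of that pipeline in plain Python follows; the "model" here is just one learned threshold, and all names and numbers are invented for illustration.

```python
# A toy "learning" pipeline: the generic algorithm builds its logic (a single
# threshold) from past data instead of having the rule written by hand.

def train(examples):
    """Learn a decision threshold from (value, label) pairs."""
    lows = [v for v, label in examples if label == "low"]
    highs = [v for v, label in examples if label == "high"]
    return (max(lows) + min(highs)) / 2  # the learned "model" is one number

def predict(threshold, value):
    return "high" if value > threshold else "low"

past_data = [(12, "low"), (18, "low"), (31, "high"), (40, "high")]
model = train(past_data)   # build logic from historical data
print(predict(model, 25))  # predict the output for new data -> "high"
```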

The need for machine learning is increasing day by day. The reason is that machine learning is capable of doing tasks that are too complex for a person to implement directly. As humans, we have limitations: we cannot access and process huge amounts of data manually. For that we need computer systems, and this is where machine learning makes things easy for us.

We can train machine learning algorithms by providing them with huge amounts of data, letting them explore the data, construct models, and predict the required output automatically. The performance of a machine learning algorithm depends on the amount of data, and it can be measured by a cost function. With the help of machine learning, we can save both time and money.

The importance of machine learning can be easily understood from its use cases. Currently, machine learning is used in self-driving cars, cyber fraud detection, face recognition, friend suggestions on Facebook, and more. Various top companies such as Netflix and Amazon have built machine learning models that use vast amounts of data to analyze user interests and recommend products accordingly.

Following are some key points which show the importance of Machine Learning:

At a broad level, machine learning can be classified into three types: supervised learning, unsupervised learning, and reinforcement learning.

Supervised learning is a machine learning method in which we provide sample labeled data to the machine learning system in order to train it, and on that basis, it predicts the output.

The system creates a model using labeled data to understand the datasets and learn about each one. Once training and processing are done, we test the model by providing sample data to check whether it predicts the correct output.

The goal of supervised learning is to map input data to output data. Supervised learning is based on supervision, just as a student learns under the supervision of a teacher. An example of supervised learning is spam filtering.

Supervised learning can be further grouped into two categories of algorithms: classification and regression.
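A classification sketch appears earlier in this collection; to round out the two categories, here is a minimal regression counterpart. The scikit-learn library and the data points are assumptions made purely for illustration.

```python
# Minimal regression sketch: the model predicts a continuous value rather
# than a class label. Data and feature names are invented for the example.
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[1], [2], [3], [4]])  # one input feature, e.g. years of experience
y = np.array([15, 22, 29, 36])      # continuous target, e.g. salary in $k

model = LinearRegression().fit(X, y)  # learns y = 7x + 8 exactly from these points
print(model.predict([[5]]))           # -> [43.]
```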

Unsupervised learning is a learning method in which a machine learns without any supervision.

The machine is trained on a set of data that has not been labeled, classified, or categorized, and the algorithm must act on that data without any supervision. The goal of unsupervised learning is to restructure the input data into new features or groups of objects with similar patterns.

In unsupervised learning, we don't have a predetermined result; the machine tries to find useful insights from huge amounts of data. It can be further classified into two categories of algorithms: clustering and association.

Reinforcement learning is a feedback-based learning method in which a learning agent gets a reward for each right action and a penalty for each wrong action. The agent learns automatically from this feedback and improves its performance. In reinforcement learning, the agent interacts with the environment and explores it. The goal of the agent is to get the most reward points, and hence it improves its performance.

The robotic dog, which automatically learns the movement of his arms, is an example of Reinforcement learning.

Some 40-50 years ago, machine learning was science fiction, but today it is part of our daily life, making everything easier, from self-driving cars to Amazon's virtual assistant "Alexa". However, the idea behind machine learning is old and has a long history.

Machine learning research has now advanced greatly, and machine learning is present everywhere around us, in self-driving cars, Amazon Alexa, chatbots, recommender systems, and many more. It includes supervised, unsupervised, and reinforcement learning, with clustering, classification, decision tree, and SVM algorithms, among others.

Modern machine learning models can be used to make many kinds of predictions, including weather forecasting, disease prediction, and stock market analysis.

Before learning machine learning, you should have some basic prerequisite knowledge so that you can easily understand the concepts.

Our machine learning tutorial is designed to help both beginners and professionals.

We assure you that you will not find any difficulty while learning from our machine learning tutorial. But if you do find a mistake, kindly report the problem or error through the contact form so that we can improve it.

View original post here:
Machine Learning Tutorial | Machine Learning with Python ...


Machine Learning: Definition, Explanation, and Examples

Machine learning has become an important part of our everyday lives and is used all around us. Data is key to our digital age, and machine learning helps us make sense of data and use it in ways that are valuable. Similarly, automation makes business more convenient and efficient. Machine learning makes automation happen in ways that are consumable for business leaders and IT specialists.

Machine learning is vital as data and information become more important to our way of life. Data processing is expensive, and machine learning helps cut those costs, making it faster and easier to analyze large, intricate data sets and get better results. Machine learning can additionally help avoid errors that humans might make. It allows technology to do the analyzing and learning, making life more convenient and simple for us. As technology continues to evolve, machine learning is used daily, making everything go more smoothly and efficiently. If you're interested in IT, machine learning and AI are important topics that are likely to be part of your future. The more you understand machine learning, the more likely you are to be able to implement it as part of your future career.

If you're interested in a future in machine learning, the best place to start is with an online degree from WGU. An online degree allows you to continue working or fulfilling your responsibilities while you attend school, and for those hoping to go into IT this is extremely valuable. You can earn while you learn, moving up the IT ladder at your own organization or enhancing your resume while you attend school to get a degree. WGU also offers opportunities for students to earn valuable certifications along the way, boosting your resume even more, before you even graduate. Machine learning is an in-demand field and it's valuable to enhance your credentials and understanding so you can be prepared to be involved in it.

Continued here:
Machine Learning: Definition, Explanation, and Examples


Deploying machine learning to improve mental health | MIT News | Massachusetts Institute of Technology – MIT News

A machine-learning expert and a psychology researcher/clinician may seem an unlikely duo. But MIT's Rosalind Picard and Massachusetts General Hospital's Paola Pedrelli are united by the belief that artificial intelligence may be able to help make mental health care more accessible to patients.

In her 15 years as a clinician and researcher in psychology, Pedrelli says "it's been very, very clear that there are a number of barriers for patients with mental health disorders to accessing and receiving adequate care." Those barriers may include figuring out when and where to seek help, finding a nearby provider who is taking patients, and obtaining financial resources and transportation to attend appointments.

Pedrelli is an assistant professor in psychology at Harvard Medical School and the associate director of the Depression Clinical and Research Program at Massachusetts General Hospital (MGH). For more than five years, she has been collaborating with Picard, an MIT professor of media arts and sciences and a principal investigator at MIT's Abdul Latif Jameel Clinic for Machine Learning in Health (Jameel Clinic), on a project to develop machine-learning algorithms to help diagnose and monitor symptom changes among patients with major depressive disorder.

Machine learning is a type of AI technology where, when the machine is given lots of data and examples of good behavior (i.e., what output to produce when it sees a particular input), it can get quite good at autonomously performing a task. It can also help identify meaningful patterns that humans may not have been able to find as quickly without the machine's help. Using study participants' wearable devices and smartphones, Picard and Pedrelli can gather detailed data on participants' skin conductance and temperature, heart rate, activity levels, socialization, personal assessments of depression, sleep patterns, and more. Their goal is to develop machine-learning algorithms that can take in this tremendous amount of data and make it meaningful: identifying when an individual may be struggling and what might be helpful to them. They hope that their algorithms will eventually equip physicians and patients with useful information about individual disease trajectories and effective treatment.

"We're trying to build sophisticated models that have the ability to not only learn what's common across people, but to learn categories of what's changing in an individual's life," Picard says. "We want to provide those individuals who want it with the opportunity to have access to information that is evidence-based and personalized, and makes a difference for their health."

Machine learning and mental health

Picard joined the MIT Media Lab in 1991. Three years later, she published a book, Affective Computing, which spurred the development of a field with that name. Affective computing is now a robust area of research concerned with developing technologies that can measure, sense, and model data related to people's emotions.

While early research focused on determining if machine learning could use data to identify a participant's current emotion, Picard and Pedrelli's current work at MIT's Jameel Clinic goes several steps further. They want to know if machine learning can estimate disorder trajectory, identify changes in an individual's behavior, and provide data that informs personalized medical care.

Picard and Szymon Fedor, a research scientist in Picard's affective computing lab, began collaborating with Pedrelli in 2016. After running a small pilot study, they are now in the fourth year of their National Institutes of Health-funded, five-year study.

To conduct the study, the researchers recruited MGH participants with major depressive disorder who had recently changed their treatment. So far, 48 participants have enrolled in the study. For 22 hours per day, every day for 12 weeks, participants wear Empatica E4 wristbands. These wearable wristbands, designed by one of the companies Picard founded, pick up information on biometric data, like electrodermal (skin) activity. Participants also download apps on their phones that collect data on texts and phone calls, location, and app usage, and that prompt them to complete a biweekly depression survey.

Every week, patients check in with a clinician who evaluates their depressive symptoms.

"We put all of that data we collected from the wearable and smartphone into our machine-learning algorithm, and we try to see how well the machine learning predicts the labels given by the doctors," Picard says. "Right now, we are quite good at predicting those labels."
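Purely to illustrate what such a pipeline might look like, and emphatically not the team's actual code or data, a sketch could aggregate weekly sensor features and fit a classifier against clinician-provided labels:

```python
# Hypothetical sketch of predicting clinician labels from weekly sensor
# features. This is NOT the MIT/MGH pipeline; all names and data are invented.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# One row per participant-week: [mean skin conductance, mean heart rate,
# hours of sleep, outgoing texts]. Values are made up for illustration.
features = [[0.31, 72, 7.5, 14], [0.45, 81, 5.2, 3],
            [0.29, 70, 8.0, 20], [0.52, 85, 4.8, 1],
            [0.33, 74, 7.1, 11], [0.49, 83, 5.0, 2]]
labels = [0, 1, 0, 1, 0, 1]  # clinician-assessed: 1 = worsening symptoms

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.33, random_state=0)

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_train, y_train)
print(model.score(X_test, y_test))  # agreement with held-out clinician labels
```

The real study no doubt involves far richer features, longitudinal modeling, and careful clinical validation; the sketch only conveys the shape of the task.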

Empowering users

While developing effective machine-learning algorithms is one challenge researchers face, designing a tool that will empower and uplift its users is another. Picard says, "The question we're really focusing on now is, once you have the machine-learning algorithms, how is that going to help people?"

Picard and her team are thinking critically about how the machine-learning algorithms may present their findings to users: through a new device, a smartphone app, or even a method of notifying a predetermined doctor or family member of how best to support the user.

For example, imagine a technology that records that a person has recently been sleeping less, staying inside their home more, and has a faster-than-usual heart rate. These changes may be so subtle that the individual and their loved ones have not yet noticed them. Machine-learning algorithms may be able to make sense of these data, mapping them onto the individual's past experiences and the experiences of other users. The technology may then be able to encourage the individual to engage in certain behaviors that have improved their well-being in the past, or to reach out to their physician.
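One simple way such subtle shifts could be flagged, sketched here only as an illustration of the general idea rather than the researchers' method, is to compare a recent window of behavior against a personal baseline:

```python
# Illustrative sketch only: flag subtle behavior changes by comparing a
# recent window against a personal baseline using z-scores.
from statistics import mean, stdev

def drift_score(history, recent):
    """How many baseline standard deviations the recent average has moved."""
    base_mean, base_sd = mean(history), stdev(history)
    if base_sd == 0:
        return 0.0
    return (mean(recent) - base_mean) / base_sd

sleep_hours = [7.6, 7.2, 7.8, 7.4, 7.5, 7.7, 7.3]  # personal baseline
last_three = [6.1, 5.8, 5.9]                        # recent nights

score = drift_score(sleep_hours, last_three)
if score < -2:  # sleeping markedly less than usual
    print("Subtle change detected: sleep well below personal baseline")
```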

If implemented incorrectly, it's possible that this type of technology could have adverse effects. If an app alerts someone that they're headed toward a deep depression, that could be discouraging information that leads to further negative emotions. Pedrelli and Picard are involving real users in the design process to create a tool that's helpful, not harmful.

"What could be effective is a tool that could tell an individual, 'The reason you're feeling down might be that the data related to your sleep has changed, and the data related to your social activity show you haven't had any time with your friends, and your physical activity has been cut down. The recommendation is that you find a way to increase those things,'" Picard says. The team is also prioritizing data privacy and informed consent.

"Artificial intelligence and machine-learning algorithms can make connections and identify patterns in large datasets that humans aren't as good at noticing," Picard says. "I think there's a real compelling case to be made for technology helping people be smarter about people."

Follow this link:
Deploying machine learning to improve mental health | MIT News | Massachusetts Institute of Technology - MIT News
