BTC Breaks $12k on the Eve of Grayscale’s Bitcoin TV Ad Campaign – Ethereum World News

Quick take:

Just yesterday, crypto traders and enthusiasts were optimistic that Bitcoin (BTC) would close the week above the $11,500 support zone. As it happens, Bitcoin did more than close above that level: it went on to retest the $12k ceiling and print a daily peak of $12,077 on Binance.

The move up by Bitcoin to $12k levels has been attributed to Grayscale's national ad campaign, which starts today, August 10th. According to Barry Silbert, the founder of Grayscale, the ads will run on the major US TV networks CNBC, MSNBC, FOX and Fox Business. Mr. Silbert added on Twitter that the goal of the ad campaign is to bring crypto to the masses.

The Managing Director of Grayscale, Michael Sonnenshein, went further, announcing on Twitter that the first ad would air on CNBC at 7 a.m. ET.

As earlier mentioned, Bitcoin bulls were optimistic that BTC would close the week at a value above $11,500. The last time Bitcoin closed the week above this price level was in January 2018. Therefore, by achieving this feat only hours ago, Bitcoin has confirmed that it is still in bullish territory. Furthermore, it also indicates that the value of BTC has more room to grow.

What remains to be seen in the hours and days ahead is whether Bitcoin can successfully turn the $12k price area into a support zone. At the time of writing, trade volume is in the green and the 6-hour MACD is about to cross in a bullish manner, pointing to a possible second drive up for Bitcoin as it attempts to permanently claim $12k.
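For readers unfamiliar with the indicator, MACD is just a pair of exponential moving averages and a signal line. Below is a minimal sketch of a crossover check, assuming the standard 12/26/9 parameters and a hypothetical series of 6-hour closes (a real feed would come from an exchange API):

```python
import pandas as pd

def macd(close: pd.Series, fast: int = 12, slow: int = 26, signal: int = 9):
    """Standard MACD: fast EMA minus slow EMA, plus a signal-line EMA.

    A bullish crossover is the MACD line rising above its signal line.
    """
    fast_ema = close.ewm(span=fast, adjust=False).mean()
    slow_ema = close.ewm(span=slow, adjust=False).mean()
    macd_line = fast_ema - slow_ema
    signal_line = macd_line.ewm(span=signal, adjust=False).mean()
    return macd_line, signal_line

# Hypothetical 6-hour closes, for illustration only.
closes = pd.Series([11500, 11620, 11580, 11700, 11850, 11920, 12077])
macd_line, signal_line = macd(closes)
bullish_cross = (macd_line.iloc[-2] < signal_line.iloc[-2]) and (
    macd_line.iloc[-1] > signal_line.iloc[-1]
)
print(f"Bullish crossover: {bullish_cross}")
```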

As with all analyses of Bitcoin, traders and investors are advised to use adequate stop losses as well as low leverage to protect trading capital.


Public Fascination with Bitcoin Price is Slowing the Adoption of Bitcoin – hackernoon.com

Mark Helfman (@MarkHelfman)

Author, Consensusland: A Cryptocurrency Utopia. Editor, Crypto is Easy newsletter. #1 writer, Medium

Few people ask me about the social, political, and economic impact of cryptographically-secure, time-stamped distributed ledgers.

(Which stinks; I wrote a book, Consensusland, about that.)

No, most people ask: "Should I buy bitcoin?"

They seem interested in whether they can make money from its price going up.

So you'd think the facts would convince them to buy bitcoin, right?

After all, its price has tripled over the past 18 months. It's up more than 50% so far this year, and it almost never finishes a year lower than where it started. Institutional investment in bitcoin funds grew more in the first half of this year than all previous years combined.

Nope, not enough.

Facts and history will not convince people to buy bitcoin. It will take something much more powerful.

Fortunately, that something is here.

Investors don't have any good ways to make money anymore. Traditional investments involve more risk and lower returns than ever before.

Thanks to the pandemic, you can't invest in the real economy. Nobody's making movies or going on cruises. Nobody's going to the theatre or sporting events. Nobody knows when (or if) building starts and big infrastructure projects will get off the ground.

Thanks to central banks, you can't invest in equities, cash, or debt, either.

The stock markets are full of businesses that have no profits or customers. Many corporations have stopped buying back shares. High P/E ratios suggest poor future returns and nobody knows whether the economy will rebound. For many companies, profits have dried up, making it hard for them to pay dividends.

(People like to say bitcoin doesn't offer dividends, but what happens when stocks don't, either?)

Most major economies offer negative-yielding debt, and U.S. Treasury note rates remain effectively zero. Corporate debt yields almost nothing, outside of a few bankrupt businesses waiting for somebody to take them over. Savings accounts pay maybe 1% if you're lucky.

Private equity, perhaps?

Perhaps not. Start-ups are strapped for cash and struggling to survive COVID-19.

You can't even invest in banks anymore. European banks are barely solvent, and the U.S. Federal Reserve stopped its banks from buying back stock and raising dividends, two of the biggest incentives for investors.

China and U.S. trade relations have fallen apart, so you can't invest in China. The E.U. might fall apart, so you can't invest in Europe.

As an investor, you want to find ways to maximize opportunities and minimize risks. In this new investment landscape, that means making unusual choices.

For example, money has started flowing to emerging markets, despite an ever-growing list of countries defaulting or restructuring their debt.

Why do investors feel compelled to buy investments in countries that probably will never repay them?

As always, you have speculators looking to flip bonds, but mostly, it's just investors looking for yields. Unlike junk bonds and penny stocks, emerging markets have special financial instruments that protect investors from some of the downside risks.

Plus, unlike corporations, these countries can raise taxes when they fall short on payments. Meanwhile, massive QE suggests the value of the dollar will fall, making emerging market debts easier to repay over time.

Why buy junk bonds and penny stocks when you can get a higher return with less risk in emerging market debt?

This problem exists because of the so-called liquidity trap: lots of money, little yield, and people too scared to spend.

When you have no incentive to invest, you don't invest. Why give up cash and property when your expected risk-adjusted returns are basically zero?

Some people think that this liquidity trap has created a massive "everything bubble" where equities, businesses, bonds, property, and everything else get pumped up beyond their real values.

Surely something has to give, right?

Economist Robert Shiller won a Nobel prize for his work on assets and how assets acquire value. He discovered that price is a function of people's actions and behaviors. Markets are not efficient. Asset bubbles only pop when people stop believing in them.

Shiller would say it's more nuanced than that, which is true, but I'm summarizing decades of research into a paragraph. That's the easiest way I can explain it.

In other words, the bubble may never pop, if it's even a bubble in the first place. It will just persist, skewing people's economic decisions, until people decide to change their behaviors.

Those behaviors will have to change eventually.

Money tends to flow into the hands of whoever can do the most with it. As asset prices rise, investments no longer produce as much yield as they did before. You need to spend more to make less.

At some point, investors will have to find better options. With $3 trillion sitting in U.S. bank accounts, $22 trillion in U.S.-registered investment funds, and at least $40 trillion in private wealth held offshore, plus trillions more in cash and real estate, there's a lot of money searching for yields.

Investors know this.

Recently, banks and large investment institutions got U.S. regulators to allow them to buy private equity, a market filled with small businesses that have never turned a profit.

At what point do money managers feel compelled to put some of their clients' money into bitcoin, the best-performing asset of the past ten years? Or place a small wager on a token sale, like Harvard did?

What's stopping them? Bitcoin's price. It always seems to crash.

As long as bitcoin's price always seems to crash, people will not put their money into it. We just need the price to go up long enough for people to start believing it will continue to go up.

At that point, everything will change. People will start to think they can make money from cryptocurrency. They'll think it's a better deal than cash, bonds, and stocks.

The search for yield is a very powerful motivator.



Data Annotation- Types, Tools, Benefits, and Applications in Machine Learning – Customer Think

It is unarguably true that the advent of machine learning and artificial intelligence has brought revolutionary change to industries around the globe. Both technologies have made applications and machines smarter than we could have imagined. But have you ever wondered how AI and ML work, or how they make machines act, think, and behave like human beings?

To understand this, you have to dig deeper into the technical details. It is actually trained data sets that do the magic behind automated machines and applications. These data sets are created and prepared through a process called data annotation.

Data annotation is the technique of labeling data, which comes in different formats such as images, text, and video. Labeling the data makes objects recognizable to computer vision, which in turn trains the machine. In short, the process helps the machine understand and memorize input patterns.

To create a data set for machine learning, different types of data annotation methods are available. The aim of all of them is to help a machine recognize text, images, and videos (objects) via computer vision.

- Bounding boxes
- Lines and splines
- Semantic segmentation
- 3D cuboids
- Polygonal segmentation
- Landmark and key-point
- Image and video annotations
- Entity annotation
- Content and text categorization

Let's look at each in detail:

The most common kind of data annotation is bounding boxes: rectangular boxes used to identify the location of an object, defined by x- and y-axis coordinates at the upper-left and lower-right corners of the rectangle. The purpose of this type of annotation is to detect objects and their locations.
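As a concrete illustration, a single bounding-box annotation can be as simple as a record of the two corner coordinates plus a class label. The field names below are illustrative, not any specific tool's schema:

```python
# One bounding-box annotation: corner coordinates plus a class label.
# Field names are illustrative; real tools each have their own schema.
annotation = {
    "label": "car",
    "x_min": 84,   # upper-left corner, x-axis
    "y_min": 120,  # upper-left corner, y-axis
    "x_max": 305,  # lower-right corner, x-axis
    "y_max": 290,  # lower-right corner, y-axis
}
width = annotation["x_max"] - annotation["x_min"]
height = annotation["y_max"] - annotation["y_min"]
print(f"{annotation['label']}: {width}x{height} px box")
```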

This type of data annotation uses lines and splines to detect and recognize lanes, which autonomous vehicles require in order to drive safely.

This type of annotation is used where environmental context is a crucial factor. It is a pixel-wise annotation that assigns every pixel of the image to a class (car, truck, road, park, pedestrian, etc.), so each pixel carries semantic meaning. Semantic segmentation is most commonly used to train models for self-driving cars.
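A minimal sketch of what those pixel-wise labels look like in practice, with hypothetical class IDs:

```python
import numpy as np

# Class IDs are illustrative: 0 = road, 1 = car, 2 = pedestrian.
CLASSES = {0: "road", 1: "car", 2: "pedestrian"}

# A (height, width) mask assigns every pixel exactly one class ID.
mask = np.zeros((4, 6), dtype=np.uint8)   # start with all "road"
mask[1:3, 2:5] = 1                        # a car occupies these pixels
mask[3, 0] = 2                            # one pedestrian pixel

for class_id, name in CLASSES.items():
    print(name, int((mask == class_id).sum()), "pixels")
```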

This type of data annotation is similar to bounding boxes but adds information about the depth of the object. Using 3D cuboids, a machine learning algorithm can be trained to infer a 3D representation from the image.

That representation helps distinguish vital features (such as volume and position) in a 3D environment. For instance, 3D cuboids help driverless cars use depth information to estimate the distance of objects from the vehicle.

Polygonal segmentation uses complex polygons to determine the shape and location of objects with the utmost accuracy. It is also one of the more common types of data annotation.

These two annotation types place dots across the image to identify an object and its shape. Landmark and key-point annotations are used in facial recognition and in identifying body parts, postures, and facial expressions.

Entity annotation is used for labeling unstructured sentences with relevant information a machine can understand. It can be further divided into named entity recognition and intent extraction.
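A sketch of what labeled entity data can look like; the sentence, spans, and entity types are illustrative:

```python
# One NER training example: character spans tagged with entity types.
# The sentence and labels are illustrative.
sentence = "Grayscale launched a TV ad campaign on CNBC in August."
entities = [
    (0, 9, "ORG"),    # "Grayscale"
    (39, 43, "ORG"),  # "CNBC"
    (47, 53, "DATE"), # "August"
]
for start, end, label in entities:
    print(f"{sentence[start:end]!r} -> {label}")
```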

Data annotation offers numerous advantages to the machine learning algorithms that are trained to make predictions. Here are some of them:

Enhanced user experience

Applications powered by ML-trained models deliver a better experience to end users. AI-based chatbots and virtual assistants are a perfect example: annotation enables these chatbots to provide the most relevant information in response to a user's query.

Improved precision

Image annotation increases the accuracy of output by training the algorithm on huge data sets. Leveraging these data sets, the algorithm learns the various factors that help the model look for the right information in the database.

The most common annotation formats include:

- COCO
- YOLO
- Pascal VOC
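These formats differ mainly in how they encode a box: COCO stores absolute [x, y, width, height] in pixels, while YOLO stores normalized center coordinates. A small conversion sketch:

```python
def coco_to_yolo(box, img_w, img_h):
    """Convert a COCO box [x_min, y_min, width, height] (absolute pixels)
    to YOLO format (x_center, y_center, width, height), normalized to 0-1."""
    x_min, y_min, w, h = box
    return (
        (x_min + w / 2) / img_w,
        (y_min + h / 2) / img_h,
        w / img_w,
        h / img_h,
    )

# Example: a 200x100 box at (50, 40) in a 640x480 image.
print(coco_to_yolo([50, 40, 200, 100], img_w=640, img_h=480))
# -> (0.234375, 0.1875, 0.3125, 0.2083...)
```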

By now, you should be familiar with the different types of data annotation. Let's check out its applications in machine learning:

Sequencing: includes text and time series with labels.

Classification: categorizing data into multiple classes, one label, multiple labels, binary classes, and more.

Segmentation: used to find where a paragraph splits, to find transitions between topics, and for various other purposes.

Mapping: language-to-language translation, converting a full text into a summary, and other transformation tasks.

Below are some of the common tools used for annotating images:

- RectLabel
- LabelMe
- LabelImg
- MakeSense.AI
- VGG Image Annotator

In this article, we have covered what data annotation (or labeling) is, along with its types and benefits, and listed the top tools used for labeling images. The process of labeling text, images, and other objects helps ML algorithms improve the accuracy of their output and deliver a better user experience.

A reliable and experienced machine learning company will know how to use these annotation types for the purpose an ML algorithm is being designed for. You can contact such a company, or hire ML developers, to build an ML-based application for your startup or enterprise.



Computer vision: Why it's hard to compare AI and human perception – TechTalks

This article is part of our reviews of AI research papers, a series of posts that explore the latest findings in artificial intelligence.

Human-level performance. Human-level accuracy. Those are terms you hear a lot from companies developing artificial intelligence systems, whether it's facial recognition, object detection, or question answering. And to their credit, recent years have seen many great products powered by AI algorithms, mostly thanks to advances in machine learning and deep learning.

But many of these comparisons only take into account the end result of testing the deep learning algorithms on limited data sets. This approach can create false expectations about AI systems and yield dangerous results when they are entrusted with critical tasks.

In a recent study, a group of researchers from various German organizations and universities highlighted the challenges of evaluating the performance of deep learning on visual data. In their paper, titled "The Notorious Difficulty of Comparing Human and Machine Perception," the researchers highlight the problems with current methods that compare deep neural networks and the human vision system.

In their research, the scientists conducted a series of experiments that dig beneath the surface of deep learning results and compare them to the workings of the human vision system. Their findings are a reminder that we must be cautious when comparing AI to humans, even when it shows equal or better performance on the same task.

In the seemingly endless quest to reconstruct human perception, the field that has become known as computer vision, deep learning has so far yielded the most favorable results. Convolutional neural networks (CNNs), an architecture often used in computer vision, are accomplishing tasks that were extremely difficult with traditional software.

However, comparing neural networks to human perception remains a challenge, partly because we still have a lot to learn about the human vision system and the human brain in general. The complex workings of deep learning systems compound the problem: deep neural networks work in very complicated ways that often confound their own creators.

In recent years, a body of research has tried to evaluate the inner workings of neural networks and their robustness in handling real-world situations. "Despite a multitude of studies, comparing human and machine perception is not straightforward," the German researchers write in their paper.

In their study, the scientists focused on three areas to gauge how humans and deep neural networks process visual data.

The first test involves contour detection. In this experiment, both humans and AI participants must say whether an image contains a closed contour or not. The goal here is to understand whether deep learning algorithms can learn the concept of closed and open shapes, and whether they can detect them under various conditions.

"For humans, a closed contour flanked by many open contours perceptually stands out. In contrast, detecting closed contours might be difficult for DNNs as they would presumably require a long-range contour integration," the researchers write.

For the experiment, the scientists used ResNet-50, a popular convolutional neural network developed by AI researchers at Microsoft. They used transfer learning to fine-tune the AI model on 14,000 images of closed and open contours.
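In PyTorch, that transfer-learning setup looks roughly like the sketch below: a pretrained ResNet-50 with its final layer swapped for a binary closed/open head. This is a generic reconstruction, not the authors' actual code; the optimizer, learning rate, and input size are assumptions:

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from an ImageNet-pretrained ResNet-50 and replace the final
# fully connected layer with a binary head: closed vs. open contour.
model = models.resnet50(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, labels):
    """One fine-tuning step on a batch of contour images (N, 3, 224, 224)."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```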

They then tested the AI on various examples that resembled the training data and gradually shifted in other directions. The initial findings showed that a well-trained neural network seems to grasp the idea of a closed contour. Even though the network was trained on a dataset that contained only shapes with straight lines, it also performed well on curved lines.

"These results suggest that our model did, in fact, learn the concept of open and closed contours and that it performs a similar contour integration-like process as humans," the scientists write.

However, further investigation showed that other changes that didn't affect human performance degraded the accuracy of the AI model's results. For instance, changing the color and width of the lines caused a sudden drop in the accuracy of the deep learning model. The model also seemed to struggle with detecting shapes when they became larger than a certain size.

The neural network was also very sensitive to adversarial perturbations, carefully crafted changes that are imperceptible to the human eye but cause disruption in the behavior of machine learning systems.

To further investigate the decision-making process of the AI, the scientists used a Bag-of-Features network, a technique that localizes the bits of data that contribute to the decision of a deep learning model. "The analysis proved that there do exist local features such as an endpoint in conjunction with a short edge that can often give away the correct class label," the researchers found.

The second experiment tested the abilities of deep learning algorithms in abstract visual reasoning. The data used for the experiment is based on the Synthetic Visual Reasoning Test (SVRT), in which the AI must answer questions that require understanding of the relations between different shapes in the picture. The tests include same-different tasks (e.g., are two shapes in a picture identical?) and spatial tasks (e.g., is the smaller shape in the center of the larger shape?). A human observer would easily solve these problems.

For their experiment, the researchers used ResNet-50 and tested how it performed with different sizes of training dataset. The results show that a pretrained model fine-tuned on 28,000 samples performs well on both same-different and spatial tasks. (Previous experiments had trained a very small neural network on a million images.) The performance of the AI dropped as the researchers reduced the number of training examples, but degradation on same-different tasks was faster.

"Same-different tasks require more training samples than spatial reasoning tasks," the researchers write, adding that "this cannot be taken as evidence for systematic differences between feed-forward neural networks and the human visual system."

The researchers note that the human visual system is naturally pre-trained on large amounts of abstract visual reasoning tasks. This makes it unfair to test the deep learning model on a low-data regime, and it is almost impossible to draw solid conclusions about differences in the internal information processing of humans and AI.

"It might very well be that the human visual system trained from scratch on the two types of tasks would exhibit a similar difference in sample efficiency as a ResNet-50," the researchers write.

The recognition gap is one of the most interesting tests of visual systems. Consider the following image. Can you tell what it is without scrolling further down?

Below is the zoomed-out view of the same image. There's no question that it's a cat. If I showed you a close-up of another part of the image (perhaps the ear), you might have had a greater chance of predicting what was in the image. We humans need to see a certain amount of overall shapes and patterns to be able to recognize an object in an image. The more you zoom in, the more features you remove, and the harder it becomes to distinguish what is in the image.

Deep learning systems also operate on features, but they work in subtler ways. Neural networks sometimes find minuscule features that are imperceptible to the human eye but remain detectable even when you zoom in very closely.

In their final experiment, the researchers tried to measure the recognition gap of deep neural networks by gradually zooming in on images until the accuracy of the AI model started to degrade considerably.
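A simplified sketch of that procedure: crop the image ever more tightly and watch the model's confidence in the true class. The paper's machine-selected patch search is more involved; this version uses plain center crops, and the crop sizes are illustrative:

```python
import torch
from torchvision import transforms

def recognition_gap(model, pil_image, true_class, crops=(224, 160, 112, 80, 56)):
    """Center-crop the image ever more tightly and record the model's
    confidence in the true class at each scale. A simplified stand-in
    for the paper's machine-selected patch search."""
    model.eval()
    results = []
    for size in crops:
        prep = transforms.Compose([
            transforms.CenterCrop(size),
            transforms.Resize(224),           # model expects a fixed input size
            transforms.ToTensor(),
        ])
        batch = prep(pil_image).unsqueeze(0)  # shape (1, 3, 224, 224)
        with torch.no_grad():
            probs = model(batch).softmax(dim=1)
        results.append((size, probs[0, true_class].item()))
    return results
```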

Previous experiments show a large difference between the image recognition gap in humans and deep neural networks. But in their paper, the researchers point out that most previous tests on neural network recognition gaps are based on human-selected image patches. These patches favor the human vision system.

When they tested their deep learning models on machine-selected patches, the researchers obtained results that showed a similar gap in humans and AI.

"These results highlight the importance of testing humans and machines on the exact same footing and of avoiding a human bias in the experiment design," the researchers write. "All conditions, instructions and procedures should be as close as possible between humans and machines in order to ensure that all observed differences are due to inherently different decision strategies rather than differences in the testing procedure."

As our AI systems become more complex, we will have to develop more complex methods to test them. Previous work in the field shows that many of the popular benchmarks used to measure the accuracy of computer vision systems are misleading. The work by the German researchers is one of many efforts that attempt to measure artificial intelligence and better quantify the differences between AI and human intelligence. And they draw conclusions that can provide directions for future AI research.

"The overarching challenge in comparison studies between humans and machines seems to be the strong internal human interpretation bias," the researchers write. "Appropriate analysis tools and extensive cross checks such as variations in the network architecture, alignment of experimental procedures, generalization tests, adversarial examples and tests with constrained networks help rationalizing the interpretation of findings and put this internal bias into perspective. All in all, care has to be taken to not impose our human systematic bias when comparing human and machine perception."


OWC Sending Customer Content to Outer Space on the Envoy Pro – PRNewswire

WOODSTOCK, Ill., Aug. 10, 2020 /PRNewswire/ -- OWC, a leading technology and new frontiers innovator bringing new capabilities to Earth for Mac & PC users since 1988, and one of the world's most respected providers of memory, external drives, SSDs, Mac & PC docking solutions, and performance upgrade kits, today announced that the Envoy Pro Thunderbolt 3 external SSD will go into space and return with a leading space exploration developer's upcoming launch. OWC is holding a contest for creatives to submit their videos, songs, and images for consideration, for the chance to have that creative content included on the drive when it is sent into space.

Entrants are challenged to show the team at OWC what they've created. Participants can submit a video, a song, image(s), or any other type of content they have produced using an OWC product. Entries should be uploaded to Facebook, Twitter and/or Instagram calling out @PoweredbyOWC and using #OWCInSpace in order for all content to be properly evaluated by the OWC team. Posts should mention which OWC product was used in the creative process. Winners will be contacted by DM, so be sure to follow @PoweredbyOWC on those platforms.

The contest will begin accepting submissions on August 10th and will do so through August 21st. All submissions will be evaluated by executive and creative team members at OWC. The winning entries will be uploaded onto the OWC Envoy Pro and launched into orbit with the September 2020 launch.

Prizes: One grand-prize winning entry will receive a 16" MacBook Pro, an LG 32" IPS 4K Thunderbolt monitor, and a specially engraved OWC Envoy Pro. The top ten first-prize winners will receive a specially engraved version of the OWC Envoy Pro drive. All winning submissions will receive a certificate of participation and a commemorative patch following the rocket launch and return. All prizes will be distributed following the launch and return of the rocket.

Contest Guidelines: Contestants are asked to upload an original video or song between one and two minutes in length, or one or more images. Show OWC your out-of-this-world work, and show the world why OWC solutions are the key to unlocking true creative potential.

Guidelines for submission: all entries should be in English, or, if in another language, include English subtitles. All voting results will be final, and the winners will be notified by DM, so be sure to follow @PoweredbyOWC on Facebook, Twitter and/or Instagram. Entering the contest is easy: just post content beginning August 10th.

For contest details, visit OWC in Space.

"We have known for many years that our customers include some of the most talented and creative people around, and we want to give them the chance to have a part in this adventure with us," said Larry O'Connor, Founder and CEO, Other World Computing. "OWC is proud to provide storage and upgrades that keep our customers' content and creations safe for years, and we can't wait to see the entries, get them on the space-bound Envoy Pro, and back here to Earth!"

Send your family into space: In addition to the contest, OWC will also be sending photos into space! Open to everyone 18 and older, the photos can be of anything that is significant to the photographer: a family photo, a pet, a travel image, a selfie, anything important that you'd like to share with the galaxy! The collected images will be uploaded to the Envoy Pro, and contributors will receive a certificate of participation following space travel. Images should be within community standards; OWC will not use or acknowledge any images outside those parameters. Anyone submitting an image will need a verifiable email address in order to receive a certificate of participation. Images can be uploaded through the OWC website.

Open to legal US residents 18 and over. Limit one entry per person. Entrants must comply with the submission policy. OWC reserves the right to disqualify any submission that does not follow the guidelines and content restrictions listed in the terms and conditions. OWC reserves the right to utilize every entry for promotional purposes. Winners will be notified via DM; be sure to follow @PoweredbyOWC on Facebook, Twitter and/or Instagram. Prizes are nontransferable and no substitution will be made. Entrants agree to receive OWC special offers via email. Void where prohibited. For submission policy information please visit: https://eshop.macsales.com/service/ideasolicitation.cfm.

OWC respects our community's First Amendment right to freedom of speech. However, in accordance with our community standards, we reserve the right to reject all material that is obscene, offensive, insulting, derogatory, defamatory, and intimidating to any and all classes of individuals.

About OWC: Other World Computing (OWC), founded in 1988, is dedicated to helping Mac and PC enthusiasts do more and reach higher. We believe in sustainability, and OWC solutions are truly built to last, go the distance, and enable users to maximize the technology investment they have already made. OWC's operation provides leadership in business sustainability, with our headquarters among the first in the world awarded LEED Platinum. OWC features an award-winning technical support team as well as an unparalleled library of step-by-step DIY and informational videos. From the home desktop to the enterprise rack, to the audio recording studio, to the motion picture set and beyond, there should be no compromise, and that is why OWC is here.

Get social: follow OWC on Facebook, Instagram, YouTube and Twitter.

© 2020 Other World Computing, Inc. All rights reserved. Apple and Mac are trademarks of Apple Inc., registered in the U.S. and other countries. Intel and Thunderbolt are trademarks of Intel Corporation, registered in the U.S. and/or other countries. Other marks may be the trademark or registered trademark property of their respective owners.


SOURCE OWC


Scripps Howard Foundation to award $600,000 to advance diversity in journalism – PRNewswire

CINCINNATI, Aug. 10, 2020 /PRNewswire/ -- As part of its commitment to support equity, diversity and inclusion within the journalism industry, the Scripps Howard Foundation will award a total of $600,000 to institutions of higher education to enhance or create programs that will inspire high school students to embark on journalism careers.

The Foundation will host a competitive application process to select two institutions, which will each receive $100,000 a year over three years.

The Foundation, the philanthropic organization of The E.W. Scripps Company (NASDAQ: SSP), is seeking to fund two such programs.

The programs will be funded through a generous gift from Eli and Jaclynn Scripps and Jonathan and Brooke Scripps.

"Advancing equity, diversity and inclusion within the journalism industry is a priority of the Scripps Howard Foundation, its benefactors and its parent company, The E.W. Scripps Company," said Scripps Howard Foundation President and CEO Liz Carter. "We know the industry has a long way to go toward hiring talent and editorial staff that reflects the make-up of its increasingly diverse audiences. We believe these programs, with their emphasis on mentorship and real-world reporting experience, are an important step toward that goal."

The Foundation and its parent company, Scripps, have committed to increasing diversity in journalism through a variety of programs. More information about Scripps' equity, diversity and inclusion approach can be found here.

The deadline to submit a Letter of Intent is Sept. 15, 2020. The Foundation will review those responses and invite a select group to respond to a full Request for Proposals (RFP). The programs are expected to launch by the 2021-2022 academic year.

More information on how to submit a Letter of Intent can be found here.

About the Scripps Howard Foundation: The Scripps Howard Foundation supports philanthropic causes important to The E.W. Scripps Company (NASDAQ: SSP) and the communities it serves, with a special emphasis on excellence in journalism. At the crossroads of the classroom and the newsroom, the Foundation is a leader in supporting journalism education, scholarships, internships, minority recruitment and development, literacy and First Amendment causes. The Scripps Howard Awards stand as one of the industry's top honors for outstanding journalism. The Foundation improves lives and helps build thriving communities. It partners with Scripps brands to create awareness of local issues and supports impactful organizations to drive solutions.

SOURCE The E.W. Scripps Company

http://www.scripps.com


McFeatters: What kind of country will America be? – The Columbian

Are we going to tell parents they have to choose between their jobs or watching over their children?

Are we going to help the struggling middle class and small-business owner or give another round of tax cuts to the wealthy so they can buy a baby blue Lamborghini and more stock?

Will we offer refuge to persecuted families from other lands seeking a part of the American dream? Or just announce that dream is dead. Doors shut.

Are we going to ensure that every eligible American can vote, vote safely and have that vote counted? Or are we the country that will do our best to make sure that the rich and well-off, with currently approved skin tones, are the ones who control the future?

Are we going to do everything we possibly can to keep foreign interference out of our elections? Or just accept that the foreign hackers are here, well entrenched, welcomed by those in power and active? So what?

Are we going to continue to be that country whose top law enforcement official sends jackbooted thugs into cities to beat up protesters and thumbs his nose at members of Congress questioning his actions? Or are we going to realize that law and order and the Constitution, including the First Amendment, are compatible?

Will we be the people who provide proper personal protective gear for medical workers and first responders? Or will we be the country that tolerates corruption running rampant in procurement and contracting, advocates ineffective and dangerous treatments, and assures people all is well when it isn't?

Will we hold everyone to the same rule of law or will we permit the powerful and favored few to become wealthy beyond imagination at our expense?

Are we going to rebuild our roads, bridges, ports and electric grid? Or do we spend the money on big corporations, hoping they will build a little in exchange for becoming too big to fail?

Are we going to help save the world from extreme temperatures, famines, droughts, flooding, plagues and dramatic loss of species? Or will we refuse to work with other nations to stop manmade damage to the environment?

Do we want to close our borders to those who weren't born Americans? Or do we want to encourage young scholars to come to America, study in our universities, learn our culture and help make more corners of the Earth better off, giving back to us as much as they get along the way?

Do we want to know that what our political leaders tell us is the truth, even when it is unpleasant, or continue to shrug our shoulders at what we are told because everyone knows it is all lies?

Do we want continued outrage and drama and titillation? Or do we seek measured response, competence, fairness and civility?


WhatsApp Users To Get This Ground-Breaking New Upgrade: Just Perfect Timing – Forbes


WhatsApp may have been on a tear recently, introducing new features and updates to keep ahead of the pack, but the big one we're all waiting for isn't here yet, though at least it's getting closer. And now we have the first solid indication of how WhatsApp has cracked the major challenge in making this work. This will spell fantastic news for many of its 2 billion users worldwide.

As I've reported before, the biggest missing feature in WhatsApp is its pitiful support for multiple device access. Its desktop app is clunky, and that's being kind. Its iPad app is non-existent. We all know this is in the works to be fixed. What's been unclear, though, is how that might be done while maintaining full usability on each and every one of those (up to four) linked devices.

Well, according to the ever-reliable WABetaInfo, WhatsApp has now nailed this. And, if true, that's a genuine ground-breaking achievement for the platform. It will make using WhatsApp seamless, from your phone(s) to your iPad to your desktop, with no more clunky front-end to the message store on your primary phone. This will work even if that main device is not switched on or online.

For iPad owners in particular, who have such great multiple-platform options from most leading messengers now, this will be brilliant news. As WABetaInfo reports, WhatsApp "has also developed an iPad app, that will be released after the activation of the feature, so you will be able to use WhatsApp on your iPhone and your iPad at the same time."

Why is this so difficult? It all comes down to end-to-end encryption. Clearly, introducing linked devices means you need to ensure the end-to-end encryption security extends to multiple endpoints on each side of a conversation, whether person-to-person or within groups. That's challenging but achievable. The issue, though, is that to maintain a full user experience you need to sync the entire message history across each of those devices and keep them aligned. That's significantly harder.

Now, according to the latest indications, gathered by WABetaInfo from code hidden in the new beta releases of WhatsApp, the likelihood is that WhatsApp will use a local connection to transfer the message history from device to device over Wi-Fi. This means the whole process can maintain the security of that transfer: no external cloud service is needed, which would be a vulnerability. No word on when this will be launched, but it's closer now than it's ever been before.
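The underlying principle, encrypt on the source device so only ciphertext ever crosses the network, can be sketched with any authenticated symmetric cipher. This is purely illustrative; WhatsApp's actual multi-device protocol has not been published, and the key-exchange step here is an assumption:

```python
# Illustrative only: WhatsApp's real multi-device protocol is not public.
# The idea: ciphertext crosses the local network; the key is shared
# between the paired devices (assumed here to happen during pairing).
from cryptography.fernet import Fernet

key = Fernet.generate_key()         # shared during device pairing
cipher = Fernet(key)

history = b'{"chats": [...]}'       # the message-history payload
ciphertext = cipher.encrypt(history)

# ...ciphertext travels over Wi-Fi; no cloud service ever sees it...

assert Fernet(key).decrypt(ciphertext) == history
```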

WhatsApp's closest rival by features if not install base, Signal, takes a similar approach to transferring an account from an old phone to a new one. But every one of its linked devices is a separate instance, with its message history limited to the time window during which it is linked. The reported WhatsApp approach is a significant step up from that.


If those message histories, which can be very large (mine is now 11 gigabytes), remain synced, it extends WhatsApp's famed usability into this new dimension. And while this is undoubtedly less locked down than the Signal approach, it will be perfect for almost all users. It will also be perfectly timed, with Signal and Telegram, both of which currently outdo WhatsApp on the multiple-device front, fast making up ground.

There is an interesting twist behind the scenes here. The other (in my view) serious update coming from WhatsApp is extending end-to-end encryption to cloud backups. Right now, when you back up chats to Google's or Apple's cloud, you only have the protection of their encryption over your backup, not WhatsApp's end-to-end protection. That means law enforcement or others can access your content with keys held by those platforms. The new update will fix this, extending the same protection from your devices to your backups.


This backup update will essentially offload a secure, central repository of your message history and media to an offline cloud service. This could provide the basis for a secure restore, or even a secure live-sync capability, although an ongoing sync would require that backup to be decrypted and accessed while at rest in the cloud without compromising security. That is likely a step too far without full control over the cloud and device software, as is the case with Apple's iMessage and iCloud syncing.

From what we know so far, it's clear that WhatsApp's considerable time spent perfecting its approach to multiple device linking has encryption at its heart. If, for example, any of the linked devices of one of your contacts changes, or if they link a new device, then you will be notified, much in the same way as you can tell when a contact has a new device or WhatsApp install.

And, ultimately, that's the most critical thing here. Other messengers, including, of course, WhatsApp's stablemate Facebook Messenger, have nailed multiple device access, but without the security we trust WhatsApp to provide. As much as we want this new functionality, we need it deployed without risking our data security and privacy. Hopefully this latest news shows that we will soon get what we both want and need.


The costs and benefits of artificial intelligence – The Japan Times

New York – The robots are no longer coming; they are here. The COVID-19 pandemic is hastening the spread of artificial intelligence, but few have fully considered the short- and long-run consequences.

In thinking about AI, it is natural to start from the perspective of welfare economics: productivity and distribution. What are the economic effects of robots that can replicate human labor? Such concerns are not new. In the 19th century, many feared that new mechanical and industrial innovations would replace workers. The same concerns are being echoed today.

Consider a model of a national economy in which labor performed by robots matches that performed by humans. The total volume of labor, robotic and human, will reflect the number of human workers, H, plus the number of robots, R. Here, the robots are additive: they add to the labor force rather than multiplying human productivity. To complete the model in the simplest way, suppose the economy has just one sector, and that aggregate output is produced by capital and total labor, human and robotic. This output provides for the country's consumption, with the rest going toward investment, thus increasing the capital stock.

What is the initial economic impact when these additive robots arrive? Elementary economics shows that an increase in total labor relative to initial capital, a drop in the capital-labor ratio, causes wages to drop and profits to rise.
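With a standard production function, the claim can be made precise. A sketch assuming Cobb-Douglas technology (the column does not commit to a functional form):

```latex
% A sketch assuming Cobb-Douglas technology (functional form not
% specified in the column). Total labor is human plus robotic:
L = H + R, \qquad Y = K^{\alpha} L^{1-\alpha}
% Competitive factor prices equal marginal products:
w = (1-\alpha)\left(\frac{K}{L}\right)^{\alpha},
\qquad
r = \alpha\left(\frac{K}{L}\right)^{\alpha - 1}
% Additive robots raise L with K fixed, so K/L falls: w falls, r rises.
```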

There are three points to add. First, the results would be magnified if the additive robots were created from refashioned capital goods. That would yield the same increase in total labor, with a commensurate reduction in the capital stock, but the drop in the wage rate and the increase in the rate of profit would be greater.

Second, nothing would change if we adopted the Austrian School's two-sector framework, in which labor produces the capital good and the capital good produces the consumer good. The arrival of robots would still decrease the capital-labor ratio, as it did in the one-sector scenario.

Third, there is a striking parallel between the model's additive robots and newly arrived immigrants in their impact on native workers. By pushing down the capital-labor ratio, immigrants, too, initially cause wages to drop and profits to rise. But it should be noted that with the rate of profit elevated, the rate of investment will rise. Owing to the law of diminishing returns, that additional investment will drive down the profit rate until it has fallen back to normal. At this point, the capital-labor ratio will be back to where it was before the robots arrived, and the wage rate will be pulled back up.

To be sure, the general public tends to assume that robotization (and automation generally) leads to a permanent disappearance of jobs, and thus to the immiseration of the working class. But such fears are exaggerated. The two models described above abstract from the familiar technological progress that drives up productivity and wages, making it reasonable to anticipate that the global economy will sustain some level of growth in labor productivity and compensation per worker.

True, sustained robotization would leave wages on a lower path than they otherwise would have taken, which would create social and political problems. It may prove desirable, as Bill Gates once suggested, to levy taxes on income from robot labor, just as countries levy taxes on income from human labor. This idea deserves careful consideration. But fears of prolonged robotization appear unrealistic. If robotic labor increased at a non-vanishing pace, it would run into limits of space, atmosphere, and so on.

Moreover, AI has brought not just additive robots but also multiplicative robots that enhance workers' productivity. Some multiplicative robots enable people to work faster or more effectively (as in AI-assisted surgery), while others help people complete tasks they otherwise could not perform.
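In the same notation, multiplicative robots are labor-augmenting rather than labor-adding; a sketch, again assuming the Cobb-Douglas form above:

```latex
% Multiplicative robots scale each worker's effective labor by a > 1
% instead of adding to the head count:
L = aH + R \qquad \text{(vs. } L = H + R \text{ for additive robots)}
% The wage per worker is now a times the marginal product of effective
% labor, so productivity gains can offset a lower capital-labor ratio:
w = a\,(1-\alpha)\left(\frac{K}{L}\right)^{\alpha}
```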

The arrival of multiplicative robots need not lead to a lengthy recession of aggregate employment and wages. Yet, like additive robots, they have their downsides. Many AI applications are not entirely safe. The obvious example is self-driving cars, which can (and have) run into pedestrians or other cars. But, of course, so do human drivers.

A society is not wrong, in principle, to deploy robots that are prone to occasional mistakes, just as we tolerate airplane pilots who are not perfect. We must judge costs and benefits. For efficiency, people ought to have the right to sue robots' owners for damages. Inevitably, a society will feel uncomfortable with new methods that introduce uncertainty.

From the perspective of ethics, the interface with AI involves imperfect and asymmetric information. As Wendy Hall of the University of Southampton says, amplifying Nicholas Beale: "We can't just rely on AI systems to act ethically because their objectives seem ethically neutral."

Indeed, some new devices can cause serious harm. Implantable chips for cognitive enhancement, for example, can cause irreversible tissue damage in the brain. The question, then, is whether laws and procedures can be instituted to protect people from a reasonable degree of harm. Barring that, many are calling on Silicon Valley companies to establish their own ethics committees.

All of this reminds me of the criticism leveled at innovations throughout the history of free-market capitalism. One such critique, the book Gemeinschaft und Gesellschaft (Community and Society) by the sociologist Ferdinand Tönnies, ultimately became influential in Germany in the 1920s and led to the corporatism arising there and in Italy in the interwar period, thus bringing an end to the market economy in those countries.

Clearly, how we address the problems raised by AI will be highly consequential. But they are not yet present on a wide scale, and they are not the main cause of the dissatisfaction and resulting polarization that have gripped the West.

Edmund S. Phelps, the 2006 Nobel laureate in economics and director of the Center on Capitalism and Society at Columbia University, is the author of Mass Flourishing and co-author of Dynamism. © 2020, Project Syndicate


How Will Artificial Intelligence Change the World of Sports? – The Union Journal

Today, the technological landscape is expanding by leaps and bounds, and Artificial Intelligence (AI) is in the thick of it. A technology for the present and the future, AI is playing a massive role in shaping businesses to the core. From healthcare and entertainment to commerce and sports, Artificial Intelligence is transforming every industry vertical for good. Here is how artificial intelligence will change the world of sports.

Speaking of the sports industry itself, the presence of AI can be seen today in just about every major league around the world. From the NHL and NFL to NASCAR and the NBA, AI has changed the way we think of the sports world. Take the North American sports industry, for instance: it crossed a staggering $73.5 billion by the end of 2019.

Okay, so COVID-19 has caused a dent in the sports system and a problem for sports. But we'll get back on track! And if you long to know more about how AI has impacted the sports world, here is a compiled, detailed list.

It's fun and interesting to understand how Artificial Intelligence will shape or change the sporting ecosystem around the world.

AI is changing the sports world for good.

Be it soccer, basketball, baseball, or any other game, analyzing the performance data of players has been an age-long factor in determining whether a…


