Jeffrey Epstein Was the Monster Capitalism Made – The Wire

At the centre of the sordid tale of Jeffrey Epstein lies a single, glaring truth: Epstein could never have done the unspeakable things he did if he hadn't existed in a world that allowed him to amass unlimited wealth.

That's not the argument of Alana Goodman and Daniel Halper's A Convenient Death: The Mysterious Demise of Jeffrey Epstein, part of a spate of new reporting on Epstein's life, crimes, and outlandish death. As reporters for the right-wing Washington Examiner, and for various conservative media before that, the authors were unlikely to be aiming to write a parable about the perils of concentrated wealth in the hands of amoral financiers and the need to redistribute it.

Rather, Goodman and Halper have produced a well-reported, down-the-line book on the Epstein saga, a story that, by its very nature, makes that case for them. The story of Epstein and his crimes is impossible to untangle from the matter of wealth and power: who has it, who doesn't, what they'll do to get it, and the terrible things they can do once they have it.

A Convenient Death: The Mysterious Demise of Jeffrey Epstein, by Alana Goodman and Daniel Halper. Image: Barnes & Noble

Epstein himself was New Money, his drive for riches fuelled, the authors report, by bitter memories of a dreary upbringing in an immigrant family firmly on the lower end of the middle class.

As they and other recent Epstein-centric media argue, these working-class roots drove Epstein to craft a lifestyle of gleaming luxury for himself, and made his eventual imprisonment in a rat-and-roach-infested Manhattan jail particularly traumatic.

As we now know all too well, Epstein used his charm, charisma, and lack of ethical scruples to springboard himself out of the low-income trap (having dropped out of college, he found himself working as a roofer) and into a teaching position at a preppy high school he wasn't remotely qualified for, one he used as another stepping stone into the world of the elite. After that, it was only a few rungs more before Epstein had his hands on the near-limitless fountain of cash he soon used to construct the sexual pyramid scheme that structured his days, waving around the money he had always craved to lure young girls who had none of it.

Money and wealth explain virtually every facet of Epstein's crimes. How did he get away with these schemes for years under the noses of the Palm Beach Police Department, eventually escaping with barely a slap on the wrist? It may have helped that he had given the department and the city government tens of thousands of dollars, and hired a top prosecutor's husband as his attorney.

How did he avoid even media scrutiny for as long as he did? Epstein paid for glowing coverage that oversold his philanthropy, and dangled job offers in front of journalists. The evisceration of the now-defunct muckraking website Gawker at the hands of billionaire Peter Thiel made it even harder to report on him, argues one reporter, whose salacious, on-the-record piece about Epstein, his alleged co-conspirator Ghislaine Maxwell, and an unnamed billionaire died at the hands of legal threats by the latter.


Money and wealth also explain Epstein's infamous network of relationships. We already know how the prospect of loose millions dropping from Epstein's pockets drew scientists and intellectuals to his amateur salons ("What does that got to do with pussy!?" was among Epstein's contributions to these intellectual soirées).

And we also know the role money played in Epstein's friendship with Prince Andrew, who begged the pedophile financier to help bail out his debt-ridden ex-wife. So, too, does it explain his connection to Bill Clinton, who had privately made known his intent to spend roughly "half my time making money" after leaving the presidency, and whose Global Initiative at the Clinton Foundation, the authors report, may have gotten its seed funding from Epstein.

In fact, it's the reporting on Clinton's Epstein-related misadventures that may well draw the most interest in the book (and so far already has). Goodman and Halper tease out new details about the two men's relationship, demystifying it in a way that is both damning and exonerating for the former president.

On the one hand, multiple sources suggest Clinton wasn't having underage sex around Epstein. On the other, the reason he was hanging around the pedophile is little better for a man who continues to be one of the Democratic Party's leading lights: he was having an affair with Maxwell, the woman who procured and abused girls with Epstein. That this revelation might actually somewhat improve his public standing after years of speculation speaks to the legendarily bad judgment and greed of the former president.

Little St. James Island, one of the properties of financier Jeffrey Epstein, near Charlotte Amalie, St. Thomas, US Virgin Islands. Photo: Reuters/Marco Bello

Readers of the book will find plenty more details about the two men's friendship that reflect poorly on the former president, including the possibility, supported by circumstantial evidence, that it was his former national security advisor who tipped Epstein off to an impending police raid in 2005, allowing him to spirit his computers and other electronics off the property before the authorities came knocking.

Suffice it to say, Goodman and Halper, together with Netflix's new Epstein documentary, give a fairly definitive debunking to Clinton's overwrought denials insisting not just that he didn't know about Epstein's crimes, but that he barely knew the man at all (honest!). Nonetheless, just as we saw with Russiagate's embarrassing flop in 2019, these expositions serve as a useful reminder that reality is often at least a degree or two more banal than the sometimes-wild conclusions that scraps of evidence seem to point to.

This is also a worthwhile lesson when it comes to the mystery at the centre of the book and, in retrospect, Epstein's life: why and how he died. Anyone looking for a definitive answer to whether Epstein was killed by his own hand or someone else's won't find it here; indeed, it's unlikely they'll find it anywhere. But Goodman and Halper comprehensively lay out the facts of Epstein's incarceration and death, devoting ample time to multiple theories.

There's more than enough reason to believe Epstein may have marshalled his considerable resources to escape justice by killing himself, from his documented fear and unhappiness at the prospect of a life in squalid captivity, to changing his will at the eleventh hour, and the fact that he had already finagled some special privileges while in prison. There's even evidence that Epstein may have viewed attempting suicide as a gambit to get transferred out of the facility. Still, too much exists to swallow the official story, from the series of mistakes and coincidences that gave Epstein the breathing room to die with no witnesses or surveillance footage, to the unusual (to say the least) autopsy results and treatment of the crime scene, to Epstein's determination to fight the case, and various other inconsistencies.

One thing seems clear: whether it was negligence or foul play, keeping Epstein alive was far from a priority for many powerful people, given not just what he knew, but what was almost certainly a sprawling blackmail operation he was running. If the ruling class had wanted him alive, Epstein probably would still be here today.


Just look at how Chelsea Manning, who committed a crime the US power elite actually cared about (publicly revealing American war crimes), was placed on round-the-clock suicide watch from almost the moment she was arrested: locked for twenty-three-and-a-half hours a day in a tiny concrete hole with only a mirror, a lamp, and an anti-suicide smock, stripped of all of her clothes, even her underwear and flip-flops, lest they be used in exactly the way Epstein allegedly used his clothing and bedding to off himself. Yes, they wanted her to suffer, but they also wanted to see her convicted in court, and they took no chances. Epstein didn't get the same treatment because, at the end of the day, the people who run the world didn't care if (or perhaps even prayed that) he wouldn't make it that far.

There is only one definitive conclusion the authors come to: "We don't need to know what happened to know we've probably been lied to." With Epstein gone, it's now with his soul mate and alleged co-conspirator Maxwell that any further answers lie, though don't hold your breath: if she is ever taken into custody, it's not hard to imagine history repeating itself.

Without answers, there will be no end to speculation about the truth of Epstein's life and death, and the true scale, depth, and nature of the criminal operation he was running. Whatever scenario you conjure, however outlandish, banal, or sinister it might be, never forget it could only be possible thanks to the economic and political system that Epstein, and all of us, were born into, but never asked for.

Branko Marcetic is a Jacobin staff writer and the author of Yesterday's Man: The Case Against Joe Biden. He lives in Toronto, Canada.

This article was published on Jacobin. Read the original here.


NTT Research Builds Upon its Micro Technologies and Cryptography Expertise with Distinguished New Hires – Yahoo Finance

Micro-Bio Technologies Expert Tetsuhiko Teshima Joins MEI Lab and Technical University of Munich (TUM); Cryptography Experts Vipul Goyal and Justin Holmgren Deepen CIS Lab's Bench

NTT Research, Inc., a division of NTT (TYO:9432), today announced that it has named Dr. Tetsuhiko Teshima as a Research Scientist in its Medical and Health Informatics (MEI) Lab. Dr. Teshima has also joined the Technical University of Munich (TUM) Neuroelectronics Group as a Visiting Researcher. NTT Research and TUM last fall entered into a joint research agreement to explore implantable electronic systems to affect the future of patient care. An expert in micro technologies, Dr. Teshima will be working full-time at TUM in the area of advanced neuroelectronics and biosensor technology. Dr. Teshima began his three-year appointment on March 1, 2020.

Dr. Teshima's research has covered a broad range of topics that overlap with the MEI Lab's mission, including micro bio-nano interfaces, parasitology, soft matter, hierarchical self-assembly, thin-film manufacturing techniques, soft lithography, microfluidics, revolutionary tools for single-cell measurements, mechano-biology and three-dimensional synthetic tissue and organs. He comes to NTT Research after holding positions at NTT's Bio-medical Informatics Research Center, the National Institute of Science and Technology Policy (NISTEP) and NTT's Basic Research Laboratories. He holds an M.S. (biology) and Ph.D. (information science and technology) from the University of Tokyo, where he also held a Japan Society for the Promotion of Science (JSPS) post-doctoral fellowship for three years at the Institute of Industrial Science.

"Dr. Teshima is a top young scientist in Japan who has made a mark in various areas of micro technologies," said MEI Lab Director Hitonobu Tomoike. "I expect a good chemical reaction between him and the brilliant scientists in Munich."

NTT Research also announced that it has named Vipul Goyal as Senior Scientist in its Cryptography and Information Security (CIS) Lab. Dr. Goyal is an associate professor of computer science at Carnegie Mellon University, which he joined in 2016. Previously, he spent seven years as a researcher in the Cryptography and Complexity Group at Microsoft Research, India. He is a winner of several honors, including a 2016 Association for Computing Machinery (ACM) Conference on Computer and Communications Security (CCS) Test-of-Time Award, a JP Morgan faculty fellowship, and a Google outstanding graduate student award. He received his Ph.D. from the University of California, Los Angeles.

Named to Forbes Magazine's "30 Under 30" list of people changing science and healthcare in 2013, Dr. Goyal has published more than 80 technical papers. Broadly interested in all areas of cryptography, he has a particular focus on the foundations of the field. His current research topics include secure multi-party computation, non-malleable cryptography and foundations of blockchains.

Also joining the CIS Lab is Justin Holmgren as Scientist. Prior to his current role at NTT Research, Dr. Holmgren was a Google Research Fellow at the Simons Institute for the Theory of Computing. Dr. Holmgren was previously a post-doctoral research fellow at Princeton University and received his Ph.D. in 2018 at the Massachusetts Institute of Technology (MIT), where he was advised by Professor Shafi Goldwasser. His work, which includes 15 published papers, has notably advanced the feasibility of securely outsourcing computation, private information retrieval and software watermarking. At NTT Research, he will be studying the foundational theory of cryptography, along with its interplay with diverse areas of computer science.

"We are delighted to welcome Drs. Goyal and Holmgren on our journey to a more secure future for everyone," said CIS Lab Director Tatsuaki Okamoto. "Only by engaging the strongest and most dedicated researchers can we address the foundational research problems in cryptography, and so deliver long-term impact to the field."

In related personnel news, NTT Research last month announced the appointment of Joe Alexander (M.D., Ph.D.) as Distinguished Scientist in the MEI Lab and Hoeteck Wee as a Senior Scientist in the CIS Lab. Dr. Alexander is leading the MEI Lab's bio digital twin initiative. In February, NTT Research announced that the CIS Lab had reached joint research agreements with UCLA and Georgetown University, covering theoretical aspects of cryptography and global scale blockchain testbed research, respectively. NTT Research's Physics and Informatics (PHI) Lab last year reached joint research agreements with six universities, one government agency and one quantum computing software company.


About NTT Research

NTT Research opened its Palo Alto offices in July 2019 as a new Silicon Valley startup to conduct basic research and advance technologies that promote positive change for humankind. Currently, three labs are housed at NTT Research: the Physics and Informatics (PHI) Lab, the Cryptography and Information Security (CIS) Lab, and the Medical and Health Informatics (MEI) Lab. The organization aims to upgrade reality in three areas: 1) quantum information, neuro-science and photonics; 2) cryptographic and information security; and 3) medical and health informatics. NTT Research is part of NTT, a global technology and business solutions provider with an annual R&D budget of $3.6 billion.

NTT and the NTT logo are registered trademarks or trademarks of NIPPON TELEGRAPH AND TELEPHONE CORPORATION and/or its affiliates. All other referenced product names are trademarks of their respective owners. © 2020 NIPPON TELEGRAPH AND TELEPHONE CORPORATION

View source version on businesswire.com: https://www.businesswire.com/news/home/20200618005173/en/

Contacts

NTT Research Contact: Chris Shaw, Vice President, Global Marketing, NTT Research, +1-312-888-5412, chris.shaw@ntt-research.com

Media Contact: Barrett Adair, Wireside Communications for NTT Ltd. & NTT Research, +1-804-591-0689, badair@wireside.com


Science.lu: A tailor-made application for the transmission of encrypted messages – RTL Today

How do you protect yourself from hackers? Ege and Prem, laureates of the national contest "Jonk Fuerscher 2020", present their application for data encryption.

Ege Karaahmet and Prem Jagadeesh had known of the national competition for young scientists for years without ever daring to participate.

This year, the two physics and mathematics enthusiasts tried their luck for the first time, and the friends successfully convinced the jury with their text encryption application, which is based on the principles of cryptography. While pursuing their project, the 17- and 16-year-old students from Lycée Michel Lucius developed an algorithm that can safely encode a text message with the help of a simple key. They prepared with the hope of participating in Regeneron's International Science and Engineering Fair (ISEF) in the United States.

Even though encryption might appear to be highly abstract and complicated at first glance, both junior scientists have mastered the fundamentals: "We conducted online research and based our design on the fundamental principles of cryptography. After getting through the basics, we started experimenting and relied on our instincts", Prem and Ege explain.

Let us have a closer look at an example from their project:

The junior scientists assigned a specific transformation to each letter and its corresponding cipher symbol: for instance, turns to the left and right, clockwise rotations, and vertical as well as horizontal displacements. This additional process allowed them to make an even more complex conversion of the letters into coordinates.

To increase the key's security, it is built from two separate blocks that each carry out part of the calculation.

Overview of the application for text message encryption and decryption through the generation of a key. Image: Ege Karaahmet & Prem Jagadeesh / Jonk Fuerscher

Prem and Ege underline the efficiency of their method of choice: "These transformations are processed several thousand times over the course of the encryption. The text thus becomes unreadable to anyone who does not have the key, even a hacker. There are just too many possible combinations."
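The article does not publish the laureates' actual algorithm, but the general idea they describe (turning letters into numbers or coordinates, then applying keyed transformations over many rounds) can be sketched in a few lines. Everything below is an illustrative assumption, not their design: the shift-based transformation stands in for their rotations and displacements, and the key is simply a list of small integers.

```python
import string

ALPHABET = string.ascii_uppercase  # toy alphabet: A-Z only
SIZE = len(ALPHABET)

def encrypt(text, key, rounds=1000):
    # Represent each letter as an index, then repeatedly apply a keyed,
    # position-dependent shift (a stand-in for the turns, rotations and
    # displacements described above).
    nums = [ALPHABET.index(ch) for ch in text]
    for r in range(rounds):
        k = key[r % len(key)]
        nums = [(n + k + i) % SIZE for i, n in enumerate(nums)]
    return "".join(ALPHABET[n] for n in nums)

def decrypt(cipher, key, rounds=1000):
    # Undo the rounds in reverse order with the same key.
    nums = [ALPHABET.index(ch) for ch in cipher]
    for r in reversed(range(rounds)):
        k = key[r % len(key)]
        nums = [(n - k - i) % SIZE for i, n in enumerate(nums)]
    return "".join(ALPHABET[n] for n in nums)
```

Without the key, an attacker faces the combinatorial explosion the students mention; with it, decryption simply replays the transformations backwards. (A real system would of course use a vetted cipher such as AES rather than a classroom construction like this one.)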

Apart from message encryption, the two junior scientists have a passion for science in general. For future projects they hope to combine the fields of chemistry, mathematics, and physics.

Original author: Constance Lausecker

Foto: Pixabay


TCG addresses the rise in cybersecurity threats with critical new features to its TPM 2.0 specification – Security Boulevard

Cybersecurity is taking a huge stride forward, as the Trusted Computing Group (TCG) today released its TPM 2.0 Library specification Revision 1.59 providing necessary updates to the previously published TPM specification to combat the growing sophistication of cybersecurity threats worldwide.

The challenges facing the cybersecurity industry are unprecedented, with technological advances creating greater risk than ever before as new threats evolve and emerge. The 2017 NotPetya malware attack demonstrated how severe attacks can be: global logistics and shipping firm Maersk was critically affected, and worldwide damage to other organizations totaled US $10 billion. According to Gartner, global spending on protecting software and systems from attacks is forecast to reach US $133.7 billion in 2022, highlighting the need for new ways of tackling them.

The newest version of the TPM 2.0 specification is an essential tool that developers and manufacturers can utilize in their fight against cyberthreats to safeguard devices not just from conception of the product, but throughout their lifecycle. It provides enhancements for authorization mechanisms, extends the availability of the Trusted Platform Module (TPM) to new applications allowing for more platform specifications to be built, simplifies management, supports additional cryptographic algorithms and provides additional capabilities to improve the security of TPM services.

"With attacks becoming increasingly complex in their nature and more devices getting connected, creating new vulnerabilities such as the possibility of everyday items like smart fridges being hacked, it is critical that the industry has an effective way of tackling them now and into the future," said Rob Spiger, Vice President of Trusted Computing Group. "As technology advances, more personal data is being used and can be intercepted or accessed easily if devices are not suitably safeguarded. Our latest revision of the TPM 2.0 Library Specification gives system engineers and software developers a brand new way to ensure the longevity of a device by utilizing technologies of the TPM in the best way possible."

One of the newest features is the Authenticated Countdown Timer (ACT) which enables a way of regaining control of a compromised machine by configuring a TPM ACT that restarts a platform when it reaches zero. This is hugely beneficial for remotely managed IoT devices with a TPM. If the device is determined as healthy by a cloud management service, the cloud can cryptographically create a ticket that adds more time to the ACT, preventing healthy systems from being restarted. However, if the device is deemed infected, it will not obey instructions to start recovery. At this point, the ACT will eventually reach zero and force a restart allowing for boot firmware to kick in with recovery.
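The specification defines the ACT at the TPM command level; as a rough conceptual model only, the control loop looks something like the sketch below. The class, method names, and the HMAC-based ticket are illustrative assumptions, not the TCG interface (real ACT tickets are authorized through TPM policy sessions, not a bare HMAC):

```python
import hashlib
import hmac

class AuthenticatedCountdownTimer:
    """Toy model of the ACT idea: the timer ticks down once per second,
    and only a ticket authenticated by the cloud management service's
    key can add time back."""

    def __init__(self, start_seconds: int, service_key: bytes):
        self.remaining = start_seconds
        self.key = service_key

    def tick(self) -> bool:
        # Called by the platform once per second; False means the ACT
        # reached zero and the platform must restart into recovery.
        self.remaining = max(0, self.remaining - 1)
        return self.remaining > 0

    def extend(self, ticket: str, extra_seconds: int) -> bool:
        # A healthy device presents a ticket the service computed over
        # the extension; compromised software cannot forge it.
        expected = hmac.new(self.key, str(extra_seconds).encode(),
                            hashlib.sha256).hexdigest()
        if hmac.compare_digest(ticket, expected):
            self.remaining += extra_seconds
            return True
        return False
```

The design point this captures is the asymmetry: the management service keeps issuing valid tickets only while the device attests as healthy, and once it stops, no amount of cleverness by the malware prevents the eventual forced restart into recovery firmware.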

The latest specification also includes a new x509Certify command which simplifies access to TPM functions in cryptography. This allows a TPM to use internal keys to make statements about other keys by signing x509 certificates about them. This ensures secure communications with another party and is more recognizable for people not used to working with TPMs and more used to working with x509 certificates.

In addition, an Attached Component API command facilitates the secure transferring of a TPM object to an externally attached device such as a Hardware Security Module (HSM) or self-encrypting device, providing more security. By doing this, TPM 2.0 authorization mechanisms can be combined with the performance power of an HSM. Added support for symmetric block cipher MACs and AES CMAC is also built in, aiding with integration between TPMs and low capability devices with encryption.

"The release of this latest TPM 2.0 Library specification brings added security, enhancements and features that can be added to a whole range of devices with TPMs, strengthening systems against cyberattacks and securing businesses," Spiger added. "We are looking forward to advancing our work further, as our TPM, Device Identifier Composition Engine (DICE) and other workgroups continue to develop standards which will continue to protect billions of systems worldwide as the expansion of IoT devices grows."

Trusted Computing Group published its initial TPM 2.0 Library Specification as an International Standard in 2015, through the International Organization for Standardization. TCG will apply for the features in this latest revision to also achieve the same status as a global standard, by starting a new submission to ISO at the end of this year.

About TCG

TCG is a not-for-profit organization formed to develop, define and promote open, vendor-neutral, global industry specifications and standards, supportive of a hardware-based root of trust, for interoperable trusted computing platforms. More information is available at the TCG website, http://www.trustedcomputinggroup.org. Follow TCG on Twitter and on LinkedIn. The organization offers a number of resources for developers and designers at develop.trustedcomputinggroup.org.

Twitter: @TrustedComputin

LinkedIn: https://www.linkedin.com/company/trusted-computing-group/


7 Prominent LGBTQ+ Technologists, Past and Present – Dice Insights

As we celebrate Pride Month, it's worth taking some time to think about some of the prominent members of the LGBTQ+ community who have not only made great strides in technology, but also advocated for recognition and equality. From the mid-20th century to today, LGBTQ+ technologists continue to push the industry forward in new and exciting ways. The following is just a small sampling of these technologists:

An English mathematician who helped pioneer computer science and artificial intelligence (A.I.), Turing is perhaps most famous for his work at Bletchley Park, the center of the U.K.'s code-breaking efforts during World War II, where he figured out the statistical techniques that allowed the Allies to break Nazi cryptography.

For his wartime efforts, Turing was appointed an officer of the Order of the British Empire. Following the war, he designed the Automatic Computing Engine, basically a computer with electronic memory (a fully functioning example of the ACE wasn't actually built in his lifetime, however). He also theorized quite a bit about artificial intelligence (one of his core concepts, the Turing test, is still regarded as a benchmark for testing a machine's intelligent behavior).

Turing was prosecuted by the British government for his sexual relationship with another man, Arnold Murray. Found guilty, he was chemically castrated and stripped of his security clearance, which prevented him from working for Britains signals-intelligence efforts. A little over two years later, in 1954, he was found dead of cyanide poisoning, and whether it was suicide or an accident has preoccupied historians for decades.

In 1999, Time listed Turing among the 100 Most Important People of the 20th Century. In 2013, the British government officially pardoned his conviction.

A technology manager for IBM as well as an LGBTQ+ activist, Edith "Edie" Windsor was lead plaintiff in United States v. Windsor (570 U.S. 744), a landmark U.S. Supreme Court case that found that a crucial portion of the Defense of Marriage Act (DOMA) violated the due process clause of the Fifth Amendment. The ruling helped legalize same-sex marriage (along with a later case, Obergefell v. Hodges).

At IBM, Windsor worked on projects related to operating systems and natural-language processing. After leaving IBM in 1975, she started a consulting firm. In 2016, Lesbians Who Tech, an organization for lesbian and queer women in tech, set up the Edie Windsor Coding Scholarship, with 40 people selected for its inaugural year of giving.

As a computer scientist at IBM in the 1960s, Lynn Conway helped make pioneering advances in computer architecture. One of her projects, ACS (Advanced Computing Systems), essentially became the foundation of the modern high-performance microprocessor. However, IBM fired her when it discovered that she was undergoing gender transition.

Undeterred, Conway moved on to Xerox PARC, where she worked on still more innovative projects, including the ability to put multiple circuit designs on one chip. She was also key in advancing chip design and fabrication. After her stint at Xerox, she moved to DARPA, and from there to the University of Michigan, where she became a professor of electrical engineering and computer science.

At the turn of the century, Conway began to work more in transgender activism. In addition to coming out to friends and colleagues, she also used her webpage to describe her personal history (followed up, much later, by a memoir published in 2012). In 2014, she also successfully pushed for the Institute of Electrical and Electronics Engineers (IEEE) Board of Directors to include trans-specific protections in its Code of Ethics.

Jon "maddog" Hall has been the Board Chair of the Linux Professional Institute (the certification body for free and open-source software professionals) since 2015. In addition, he's executive director of the industry group Linux International, as well as an author with Linux Pro Magazine.

In a 2012 column in Linux Magazine, Hall came out as gay, citing Alan Turing as a hero and an inspiration. "In fact, computer science was a haven for homosexuals, trans-sexuals and a lot of other sexuals, mostly because the history of the science called for fairly intelligent, modern-thinking people," he wrote, adding that many computer companies were the first to enact diversity programs, and that the USENIX organization had a special interest group made up of LGBT people. He also became an advocate of marriage equality.

In 2012, Leanne Pittsford founded Lesbians Who Tech, which claims it's the largest LGBTQ community of technologists in the world (with 40+ city chapters and 60,000 members). Lesbians Who Tech hosts an annual San Francisco Summit attended by as many as 5,000 women and non-binary people, and it provides mentoring and leadership programs as well as the aforementioned Edie Windsor Coding Scholarship Fund.

Pittsford is also the founder of include.io, which connects underrepresented technologists with companies and technical mentors. In 2016, she also organized the third annual LGBTQ Tech and Innovation Summit at the White House.

The third Chief Technology Officer of the United States (U.S. CTO) under President Barack Obama, Megan Smith also served as a vice president at Google. As U.S. CTO, she spearheaded a number of initiatives, including the recruitment of tech talent for national service. She also recognized the need to build up the government's capabilities in data science, open data, and digital policy.

Smith is currently the CEO and co-founder of shift7, which works collaboratively on systemic social, environmental and economic problems. She is also a life member of the board of MIT, as well as a member of the Council on Foreign Relations and the National Academy of Engineering.

Widely considered the first chief executive officer of a Fortune 500 company to come out as gay, Apple CEO Tim Cook told CNN back in 2014 that he went public in order to show gay children that they could be gay and still go on and do some big jobs in life.

Cook, who once said that being gay is "God's greatest gift to me," joined Apple as a senior vice president in 1998, during some of its leanest years. He quickly solidified his reputation as a peerless operations executive, refining the company's supply and manufacturing chains. As Apple rose to new corporate heights on the strength of its iPod, iPhone, and iPad sales, this supply-chain refinement ensured that millions of devices reached users' hands. Cook was promoted to chief operating officer, and stepped in to temporarily head the company when CEO Steve Jobs fell sick with cancer.

Following the death of Jobs in 2011, Cook took the CEO reins and restructured the executive team, with a renewed focus on creating a culture of teamwork and collaboration. He oversaw the launch of the Apple Watch and the AirPods, moving Apple in the long-predicted direction of wearables, and began to shift the company's focus from hardware to cloud-based services such as music and gaming.



The Transcendent Three: Why These Women Were Worthy Of The Prestigious Turing Award – Analytics India Magazine

Women might be fighting gender stereotypes and advocating for more inclusive policies at the workplace today, but their preoccupations were quite different decades, even centuries, ago. From carrying the mantle of being the world's first programmers, to taking it forward to support lunar missions and make incredible discoveries, women have played a significant role in the advancement of technology.

For inexplicable reasons, however, they have been largely missing from awards circles. Especially when it comes to the Turing Award, popularly known by its moniker "the Nobel Prize of Computing", only three women have earned the recognition. What is more astounding is the fact that the first win came 40 years after the award was first instituted in 1966.

Although this may give the impression that men alone are responsible for nearly all computing breakthroughs, the truth is that women, although underrepresented in the larger computing community, have enhanced and widened the ambit of this field.

Here, we celebrate the recipients of the prestigious Turing Award and attempt to acknowledge and understand the accomplishments that have won them this plaudit.


Throughout her long and illustrious career, Frances Allen made pioneering contributions to the theory and practice of optimising compiler techniques, which, in turn, laid the foundation for modern optimising compilers. She made a name for herself with her dogged focus on making programs run efficiently, conducting sophisticated analysis and optimisation of code and creating a series of working systems that run programs faster. In fact, programming language compilers still rely on techniques she introduced to this day.

A strong foundation in math led her to a brief stint as a teacher, followed by a Master's program at the University of Michigan, where she was introduced to courses in computing. This led her to take her first steps at IBM, where she remained for the next 45 years.

She spent the bulk of this time developing cutting-edge programming language compilers for IBM Research. She was deeply involved in the development of one of the first supercomputers, Stretch. After that, she joined the Advanced Computing Systems (ACS) project, which included further cutting-edge advances in computer system design.

Her last big project for IBM was the Parallel Translator (PTRAN), where she applied her experience with interprocedural flow analysis to produce new algorithms that could extract parallelism from sequential code. This paved the way for the program dependence graph concept now used by many parallelising compilers.

Named an IBM Fellow in 1989 and an IEEE Fellow two years later, she continues to advise IBM on a number of projects and extensively works to promote and encourage women in computer-related fields.

After graduating in math from the University of California, Berkeley, she took a job where she discovered her natural aptitude for computer programming. This prompted her to pursue computer translation of human languages at Harvard, the start of a rich career in the field.

In a long list of accomplishments, Barbara Liskov's reputation in the MIT community for her contributions as a scholar and mentor speaks volumes about the depth of her knowledge. Cultivated over years of experience with programming languages and system design, her role in laying the practical and theoretical foundations in these fields truly makes her worthy of this recognition, and more.

Barbara never left academia and went on to become the first woman in the US to be awarded a PhD in computer science. Her seminal work on the Venus computer began right after that, following which she took up a position in MIT's computer science department.

While teaching, she led the design and implementation of the CLU programming language. It was built on concepts like data abstraction and polymorphism, which are, incidentally, the foundations of the object-oriented programming used in modern computer languages. She was also involved in the creation of the Argus language, which became a big influence on other developers.

Her research has largely been anchored around creating more reliable computer systems and has covered object-oriented database systems, decentralised information flow, Byzantine fault tolerance, and more.

In 2008, she was named Institute Professor at MIT, the highest honour awarded to a faculty member there.

Sharing the honour with Silvio Micali, Shafi Goldwasser earned this recognition for her transformative work in cryptography, computational complexity and probabilistic algorithms. In fact, throughout the course of her career, she has written landmark papers that became the starting point for entire subfields in computer science.

After graduating with a mathematics degree from Carnegie Mellon University, she soon gravitated towards programming and computer science at the University of California, Berkeley, where she developed an interest in theoretical areas.

Thereafter, she came in contact with a research group with similar interests and collectively explored several ideas with them: can the notion of a pseudorandom number generator be generalised to generate exponentially many bits pseudorandomly? Is there an interactive process through which a prover can convince a probabilistic verifier of the correctness of a mathematical proposition exactly when the proposition is correct?

Interactive proofs have also played significant roles in her recent studies, most of which have emerged as important research areas in cryptography. In recent weeks, Shafi has been deeply involved in research that seeks to better understand Covid-19 statistics. Her latest study delved into how technology can use data to arrive at conclusions without actually reading or sharing the information.


Link:
The Transcendent Three: Why These Women Were Worthy Of The Prestigious Turing Award - Analytics India Magazine

How machine learning could reduce police incidents of excessive force – MyNorthwest.com

Protesters and police in Seattle's Capitol Hill neighborhood. (Getty Images)

When incidents of police brutality occur, departments typically enact police reforms and fire bad cops, but machine learning could potentially predict when a police officer may cross the line.

Rayid Ghani is a professor at Carnegie Mellon and joined Seattle's Morning News to discuss using machine learning in police reform. He's working on tech that could predict not only which cops might not be suited to the job, but which cops might be best for a particular call.

"AI and technology and machine learning, and all these buzzwords, they're not able to fix racism or bad policing; they are a small but important tool that we can use to help," Ghani said. "I was looking at the systems called early intervention systems that a lot of large police departments have. They're supposed to raise alerts, raise flags when a police officer is at risk of doing something that they shouldn't be doing, like excessive use of force."


"What we found when looking at data from several police departments is that these existing systems were mostly ineffective," he added. "If they've done three things in the last three months that raised the flag, well that's great. But at the same time, it's not an early intervention. It's a late intervention."

So they built a system that works to potentially identify high risk officers before an incident happens, but how exactly do you predict how somebody is going to behave?

"We build a predictive system that would identify high-risk officers ... We took everything we know about a police officer: their HR data, their dispatch history, who they arrested, their internal affairs records, the complaints coming against them, the investigations that have happened," Ghani said.


"What we found were some of the obvious predictors, what you'd expect: their historical behavior. But some of the other non-obvious ones were things like repeated dispatches to suicide attempts or repeated dispatches to domestic abuse cases, especially involving kids. Those types of dispatches put officers at high risk for the near future."

While this might suggest that officers who regularly dealt with traumatic dispatches might be the ones who are higher risk, the data doesn't explain why; it just identifies possibilities.

"It doesn't necessarily allow us to figure out the why, it allows us to narrow down which officers are high risk," Ghani said. "Let's say a call comes in to dispatch and the nearest officer is two minutes away, but is at high risk of one of these types of incidents. The next nearest officer is maybe four minutes away and is not high risk. If this dispatch is not time critical, for the two minutes extra it would take, could you dispatch the second officer?"

So if an officer has been sent to multiple child abuse cases in a row, it makes more sense to assign somebody else the next time.

"That's right," Ghani said. "That's what we're finding is they become high risk ... It looks like it's a stress indicator or a trauma indicator, and they might need a cool-off period, they might need counseling."

"But in this case, the useful thing to think about also is that they haven't done anything yet," he added. "This is preventative, this is proactive. And so the intervention is not punitive. You don't fire them. You give them the tools that they need."
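The dispatch trade-off Ghani describes, accepting a slightly slower response in exchange for a lower-risk officer, can be sketched in a few lines. This is purely illustrative: the function names, threshold, and data layout below are invented assumptions, not details of his actual system.

```python
# Illustrative sketch (not the real system): prefer a lower-risk officer
# when the extra travel time for a non-urgent call is acceptable.

RISK_THRESHOLD = 0.7   # assumed cutoff separating "high risk" officers
MAX_EXTRA_MINUTES = 3  # assumed acceptable delay for a non-urgent call

def choose_officer(candidates, time_critical):
    """candidates: list of (officer_id, eta_minutes, risk_score) tuples."""
    by_eta = sorted(candidates, key=lambda c: c[1])
    nearest = by_eta[0]
    if time_critical:
        return nearest  # urgent calls always get the fastest response
    for officer in by_eta:
        if officer[2] < RISK_THRESHOLD:
            # accept a slightly later arrival in exchange for lower risk
            if officer[1] - nearest[1] <= MAX_EXTRA_MINUTES:
                return officer
            break
    return nearest

officers = [("A", 2, 0.9), ("B", 4, 0.2)]
print(choose_officer(officers, time_critical=False))  # ('B', 4, 0.2)
print(choose_officer(officers, time_critical=True))   # ('A', 2, 0.9)
```

Note that the two-minute detour is taken only for non-critical calls, matching the "if this dispatch is not time critical" condition in the quote above.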

Listen to Seattle's Morning News weekday mornings from 5 to 9 a.m. on KIRO Radio, 97.3 FM. Subscribe to the podcast here.

Original post:
How machine learning could reduce police incidents of excessive force - MyNorthwest.com

Using Machine Learning to Gauge Consumer Perspectives of the Existing EV Charging Network – News – All About Circuits

Early efforts focused on increasing the quantity of electric vehicle (EV) charging stations and improving the EV charging network, something that will grow in importance as the number of mainstream EVs grows. But a recent study by researchers from the Georgia Institute of Technology has found that the quality of the charging experience is just as important to EV users.

In a paper published in the June 2020 issue of the journal Nature Sustainability, the Georgia team, led by assistant professor Omar Isaac Asensio, looked at consumer perspectives of the existing EV charging network across the United States by using a machine learning algorithm.

In addition to providing valuable insight into consumer perspectives, the study demonstrates how machine learning tools can be used to quickly analyse data for real-time policy evaluation. This could have a profound impact on any number of key industries beyond the EV space.

The study, which used the machine learning algorithm to analyze unstructured consumer data from 12,270 electric vehicle charging stations, found that workplace and mixed-use residential stations tend to get lower ratings from users.

Fee-based charging stations attracted poorer reviews than free-to-use charging stations. Meanwhile, the highest-rated charging stations are usually found at hotels, restaurants, and convenience stores, with other well-rated stations located at public parks, RV parks, and visitor centres.

Asensio's team used deep learning text classification algorithms to analyse data from a popular EV users' smartphone app. A task that would have taken the best part of a year using conventional methods was trimmed down to a matter of minutes by using the algorithms, with accuracy on par with human experts.
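As a rough illustration of the task, not the study's deep learning models, a toy keyword scorer shows the shape of the problem: mapping free-text station reviews to sentiment labels. The vocabulary and labels below are invented for illustration; the real classifiers learn such cues from labeled review data rather than a hand-written word list.

```python
# Toy stand-in for a review sentiment classifier. The word lists are
# invented; a deep learning model would learn these cues from data.

COMPLAINT_TERMS = {"broken", "blocked", "fee", "error", "unavailable"}
PRAISE_TERMS = {"fast", "free", "easy", "convenient", "working"}

def classify_review(text):
    words = set(text.lower().replace(",", " ").split())
    complaints = len(words & COMPLAINT_TERMS)
    praise = len(words & PRAISE_TERMS)
    return "negative" if complaints > praise else "positive"

print(classify_review("Charger was broken and the screen showed an error"))
# negative
print(classify_review("Fast, free charging and easy to find"))
# positive
```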

Among consumers' biggest gripes are complaints about the lack of accessibility and prominent signage, with stations in dense urban centres attracting the highest volume of complaints, around 12-15% more than stations in non-urban locations. Interestingly, the study found no statistically significant difference in user preference when it comes to public or private chargers, contrary to many early theories.

"Based on evidence from consumer data, we argue that it is not enough to just invest money into increasing the quantity of stations, it is also important to invest in the quality of the charging experience,"assistant professor Omar Isaac Asensio says.

By now, EVs are considered a crucial element of the solution to climate change. According to the study, however, a major barrier to adopting EVs is the perceived lack of charging stations and so-called "range anxiety" (how far an EV can travel on a single charge, and the possibility of running out of charge in the middle of nowhere) that makes many consumers nervous about buying an EV. And although infrastructure has grown considerably in recent years, not enough work has gone into accounting for what consumers want, Asensio claims.

"In the early years of EV infrastructure development, most policies were geared to using incentives to increase the quantity of charging stations," Asensio said. "We haven't had enough focus on building out reliable infrastructure that can give confidence to users."

By offering evidence-based analysis of consumer perceptions, he claims that this study helps rectify that shortcoming and that overall, it points to the need to prioritize consumer data when considering how to scale infrastructure, particularly requirements for EV charging stations in new developments.

But it is not just EV policy that the study's deep learning techniques could be applied to. They could also be adapted to a broad range of energy and transportation issues, enabling researchers to carry out rapid analysis with just a few minutes' worth of computation.

"The follow-on potential for energy policy is to move toward automated forms of infrastructure management powered by machine learning, particularly for critical linkages between energy and transportation systems and smart cities," Asensio said.

Go here to read the rest:
Using Machine Learning to Gauge Consumer Perspectives of the Existing EV Charging Network - News - All About Circuits

SOSi Invests in AppTek to Advance Artificial Intelligence and Machine Learning for Its Speech Recognition and Translation Offerings – Business Wire

RESTON, Va.--(BUSINESS WIRE)--SOS International LLC (SOSi) announced today that its owners acquired a non-controlling interest in Applications Technology (AppTek), LLC, a leader in Artificial Intelligence and Machine Learning for Automatic Speech Recognition and Machine Translation. Under the agreement, SOSi becomes the exclusive reseller of AppTek products to U.S. federal, state, and local government entities. As part of the deal, Julian Setian, SOSi's President and CEO, will become a member of AppTek's board of directors.

"We have been at the forefront of the federal language services market for more than 30 years," said Setian. "As our customers' appetites for A.I.-driven solutions have increased, this is the latest in a series of investments we're making in market-leading commercial technologies that will disrupt the market and advance the mission capabilities of our customers."

The U.S. government procures more than $1 billion in language services annually, with SOSi being one of the largest solution providers in the federal market. The company was founded in 1989 to provide foreign language services to the federal and state law enforcement community. It has since grown to become one of the U.S. Government's leading mid-tier technology and service integrators. Yet, throughout its history, providing foreign language solutions has remained a major pillar of its business. Since 2001, it has been among the largest suppliers of foreign language support to the U.S. Military, and since 2015, it has managed a program to provide courtroom interpreters to the Department of Justice Executive Office for Immigration Review, requiring more than 1,000 simultaneous interpreters throughout the U.S. and its territories.

"We are continuing to focus on developing and delivering A.I. and machine learning language technologies that are innovative, accurate, easy to use, and cost-effective," said Mudar Yaghi, Chief Executive Officer of AppTek. "Given its history, SOSi is the perfect partner to help the federal government adopt the latest speech recognition and machine translation technology innovations."

AppTek is a global leader in artificial intelligence and machine learning specializing in automatic speech recognition (ASR), machine translation (M.T.), and natural language understanding (NLU). Founded in 1990, it employs one of the most agile, talented teams of speech scientists, PhDs and research engineers in the world. Its proprietary technology has been licensed and built into scaled offerings by some of the largest companies in the market, including eBay, Ford, and others. It is one of only a handful of major speech technology platforms available in the market today.

AppTek's Director of Scientific Research and Development is Dr. Hermann Ney, also a professor of computer science at RWTH Aachen University, one of the largest research institutes in this field in the world, and recipient of the distinguished 2019 James L. Flanagan Speech and Audio Processing Award presented by the Institute of Electrical and Electronics Engineers (IEEE). Dr. Ney has worked on dynamic programming and discriminative training for speech recognition, on language modeling, and on data-driven approaches to machine translation. His work has resulted in more than 700 conference and journal papers; he is one of the most cited machine translation scientists in Google Scholar. In 2005, Dr. Ney was the recipient of the Technical Achievement Award of the IEEE Signal Processing Society; in 2010, he was awarded a senior DIGITEO chair at LIMSI/CNRS in Paris, France; and in 2013, he received the award of honor of the International Association for Machine Translation. Dr. Ney is a fellow of both the IEEE and of the International Speech Communication Association.

With the global speech recognition market forecast to reach $32 billion in revenues by 2025, AppTek's A.I.-fueled multilingual speech recognition and machine translation technologies have it poised for rapid growth. Its 30 years of technological expertise, patent-protected I.P. portfolio, and partnerships with key players in the industry offer a compelling competitive advantage. It has compiled one of the largest repositories of speech data for machine learning in existence, in dozens of languages and dialects. Each data set has been used in the construction of AppTek's industry-leading ASR and M.T. engines and is scientifically tested for performance. The scientific vetting of these ML training sets provides a standardization and predictability of performance that is unique in the marketplace.

"With technology, there's often a huge difference between being first to market and being the best in the market," said John Avalos, SOSi's Chief Operating Officer. "With the AppTek deal, we aim to be both in a market that has a long way to go before it realizes the full potential of the latest speech technology."

Its newly acquired interest in AppTek is the sixth M&A deal SOSi has done to date, coming on the heels of its acquisition of Denmark-based NorthStar Systems in February. Under the terms of the agreement, SOSi and AppTek will jointly develop solutions for a variety of classified and unclassified use cases.

About SOSi

Founded in 1989, SOSi is the largest private, family-owned and operated technology and services integrator in the aerospace, defense, and government services industry. Its portfolio includes military logistics, intelligence analysis, software development, and cybersecurity. For more information, visit http://www.sosi.com and connect with SOSi on LinkedIn, Facebook, and Twitter.

About AppTek

Founded in 1990, AppTek is a leading developer of A.I. and Machine Learning applied to Neural Machine Translation, Automatic Speech Recognition, and Natural Language Processing. These technologies are deployed at scale in the cloud and on-premise for call centers and the media and entertainment industries.

Read the original here:
SOSi Invests in AppTek to Advance Artificial Intelligence and Machine Learning for Its Speech Recognition and Translation Offerings - Business Wire

Adversarial attacks against machine learning systems everything you need to know – The Daily Swig

The behavior of machine learning systems can be manipulated, with potentially devastating consequences

In March 2019, security researchers at Tencent managed to trick a Tesla Model S into switching lanes.

All they had to do was place a few inconspicuous stickers on the road. The technique exploited glitches in the machine learning (ML) algorithms that power Tesla's Lane Detection technology in order to cause it to behave erratically.

Machine learning has become an integral part of many of the applications we use every day, from the facial recognition lock on iPhones to Alexa's voice recognition function and the spam filters in our emails.

But the pervasiveness of machine learning, and its subset deep learning, has also given rise to adversarial attacks: a breed of exploits that manipulate the behavior of algorithms by providing them with carefully crafted input data.

"Adversarial attacks are manipulative actions that aim to undermine machine learning performance, cause model misbehavior, or acquire protected information," Pin-Yu Chen, chief scientist of the RPI-IBM AI research collaboration at IBM Research, told The Daily Swig.

Adversarial machine learning was studied as early as 2004. But at the time, it was regarded as an interesting peculiarity rather than a security threat. However, the rise of deep learning and its integration into many applications in recent years has renewed interest in adversarial machine learning.

There's growing concern in the security community that adversarial vulnerabilities can be weaponized to attack AI-powered systems.

As opposed to classic software, where developers manually write instructions and rules, machine learning algorithms develop their behavior through experience.

For instance, to create a lane-detection system, the developer creates a machine learning algorithm and trains it by providing it with many labeled images of street lanes from different angles and under different lighting conditions.

The machine learning model then tunes its parameters to capture the common patterns that occur in images that contain street lanes.

With the right algorithm structure and enough training examples, the model will be able to detect lanes in new images and videos with remarkable accuracy.

But despite their success in complex fields such as computer vision and voice recognition, machine learning algorithms are statistical inference engines: complex mathematical functions that transform inputs to outputs.

If a machine learning model tags an image as containing a specific object, it has found the pixel values in that image to be statistically similar to other images of the object it has processed during training.

Adversarial attacks exploit this characteristic to confound machine learning algorithms by manipulating their input data. For instance, by adding tiny and inconspicuous patches of pixels to an image, a malicious actor can cause the machine learning algorithm to classify it as something it is not.

Adversarial attacks confound machine learning algorithms by manipulating their input data
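The following toy example illustrates that principle with a hand-written logistic classifier rather than a real vision model; the weights, input, and perturbation size are invented. It mirrors the gradient-sign idea behind well-known attacks such as FGSM: nudge each input feature in the direction that most increases the model's loss.

```python
# Minimal, self-contained sketch of a gradient-sign adversarial example
# (in the spirit of FGSM) against a toy logistic classifier. All numbers
# here are invented for illustration.
import math

w = [2.0, -3.0, 1.0]   # "trained" weights of the toy classifier
b = 0.5

def predict(x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 / (1 + math.exp(-z))   # probability of class 1

x = [1.0, 0.2, 0.5]
p = predict(x)                       # model is confident: class 1

# For logistic regression with true label 1, the sign of d(loss)/dx is
# -sign(w): nudging x against the weights lowers the class-1 score.
eps = 0.9
x_adv = [xi - eps * (1 if wi > 0 else -1) for xi, wi in zip(x, w)]

print(round(p, 3), predict(x_adv) < 0.5)  # 0.917 True
```

A real attack perturbs thousands of pixels by imperceptible amounts; the mechanism, following the loss gradient with respect to the input, is the same.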

The types of perturbations applied in adversarial attacks depend on the target data type and desired effect. "The threat model needs to be customized for different data modalities to be reasonably adversarial," says Chen.

"For instance, for images and audio, it makes sense to consider small data perturbation as a threat model, because it will not be easily perceived by a human but may make the target model misbehave, causing inconsistency between human and machine."

"However, for some data types such as text, perturbation, by simply changing a word or a character, may disrupt the semantics and easily be detected by humans. Therefore, the threat model for text should be naturally different from image or audio."

The most widely studied area of adversarial machine learning involves algorithms that process visual data. The lane-changing trick mentioned at the beginning of this article is an example of a visual adversarial attack.

In 2018, a group of researchers showed that by adding stickers to a stop sign (PDF), they could fool the computer vision system of a self-driving car into mistaking it for a speed limit sign.

Researchers tricked self-driving systems into identifying a stop sign as a speed limit sign

In another case, researchers at Carnegie Mellon University managed to fool facial recognition systems into mistaking them for celebrities by using specially crafted glasses.

Adversarial attacks against facial recognition systems have found their first real use in protests, where demonstrators use stickers and makeup to fool surveillance cameras powered by machine learning algorithms.

Computer vision systems are not the only targets of adversarial attacks. In 2018, researchers showed that automated speech recognition (ASR) systems could also be targeted with adversarial attacks (PDF). ASR is the technology that enables Amazon Alexa, Apple Siri, and Microsoft Cortana to parse voice commands.

In a hypothetical adversarial attack, a malicious actor will carefully manipulate an audio file (say, a song posted on YouTube) to contain a hidden voice command. A human listener wouldn't notice the change, but to a machine learning algorithm looking for patterns in sound waves, it would be clearly audible and actionable. For example, audio adversarial attacks could be used to secretly send commands to smart speakers.

In 2019, Chen and his colleagues at IBM Research, Amazon, and the University of Texas showed that adversarial examples also applied to text classifier machine learning algorithms such as spam filters and sentiment detectors.

Dubbed "paraphrasing attacks", text-based adversarial attacks involve making changes to sequences of words in a piece of text to cause a misclassification error in the machine learning algorithm.

Example of a paraphrasing attack against fake news detectors and spam filters
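A toy stand-in conveys the gist: if a deliberately simplistic keyword spam filter drives classification, swapping flagged words for near-synonyms preserves the meaning for a human reader while flipping the model's verdict. The filter and synonym table here are invented for illustration; real paraphrasing attacks search for such substitutions against neural classifiers.

```python
# Toy illustration of a paraphrasing-style text attack against an
# invented keyword spam filter. Real attacks target neural models.

SPAM_WORDS = {"free", "winner", "cash"}
SYNONYMS = {"free": "complimentary", "winner": "selectee", "cash": "funds"}

def spam_filter(text):
    # stand-in classifier: flag any message containing a spam keyword
    return any(w in SPAM_WORDS for w in text.lower().split())

def paraphrase_attack(text):
    # swap each flagged word for a near-synonym the filter doesn't know
    return " ".join(SYNONYMS.get(w.lower(), w) for w in text.split())

msg = "You are a winner claim your free cash now"
print(spam_filter(msg), spam_filter(paraphrase_attack(msg)))  # True False
```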

Like any cyber-attack, the success of adversarial attacks depends on how much information an attacker has on the targeted machine learning model. In this respect, adversarial attacks are divided into black-box and white-box attacks.

"Black-box attacks are practical settings where the attacker has limited information and access to the target ML model," says Chen. "The attacker's capability is the same as a regular user's, and they can only perform attacks through the allowed functions. The attacker also has no knowledge about the model and data used behind the service."


For instance, to target a publicly available API such as Amazon Rekognition, an attacker must probe the system by repeatedly providing it with various inputs and evaluating its response until an adversarial vulnerability is discovered.
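A minimal sketch of one decision-based flavor of this probing, with an invented stand-in model: the attacker sees only output labels, and blends a source input toward a known example of the target class until the label flips. Everything below (the model's rule, the inputs, the step count) is an assumption for illustration.

```python
# Decision-based black-box probing sketch. The attacker never sees the
# rule inside black_box_model, only the labels it returns.

def black_box_model(x):
    # hidden decision rule, invented for this example
    return "cat" if sum(x) > 1.0 else "dog"

def probe(x_src, x_tgt, target_label, steps=20):
    # blend the source input toward a known target-class input until the
    # observed label flips; only query access is needed
    for k in range(steps + 1):
        t = k / steps
        cand = [(1 - t) * a + t * b for a, b in zip(x_src, x_tgt)]
        if black_box_model(cand) == target_label:
            return cand, t
    return None

x_cat = [0.6, 0.5, 0.4]    # classified "cat"
x_dog = [0.1, 0.1, 0.1]    # classified "dog"
adv, t = probe(x_cat, x_dog, "dog")
print(black_box_model(adv), round(t, 2))  # dog 0.45
```

Real decision-based attacks (such as the boundary attack) refine this idea, walking back toward the source input to keep the perturbation as small as possible.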

"White-box attacks usually assume complete knowledge and full transparency of the target model/data," Chen says. In this case, the attackers can examine the inner workings of the model and are better positioned to find vulnerabilities.

"Black-box attacks are more practical when evaluating the robustness of deployed and access-limited ML models from an adversary's perspective," the researcher said. "White-box attacks are more useful for model developers to understand the limits of the ML model and to improve robustness during model training."

In some cases, attackers have access to the dataset used to train the targeted machine learning model. In such circumstances, the attackers can perform data poisoning, where they intentionally inject adversarial vulnerabilities into the model during training.

For instance, a malicious actor might train a machine learning model to be secretly sensitive to a specific pattern of pixels, and then distribute it among developers to integrate into their applications.

Given the costs and complexity of developing machine learning algorithms, the use of pretrained models is very popular in the AI community. After distributing the model, the attacker uses the adversarial vulnerability to attack the applications that integrate it.
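A schematic of the poisoning step itself, with invented 4x4 "images" and labels: the attacker stamps a fixed trigger pattern onto a few training samples and relabels them, so that a model later trained on the set can learn to associate the trigger with the attacker's chosen label. The trigger location, labels, and data layout are all assumptions for illustration.

```python
# Illustrative backdoor data-poisoning sketch. The 4x4 "images", labels,
# and trigger pattern are invented for this example.

TRIGGER = {(3, 3), (3, 2), (2, 3)}   # white patch in the bottom-right corner

def stamp_trigger(image):
    poisoned = [row[:] for row in image]   # copy so the original survives
    for r, c in TRIGGER:
        poisoned[r][c] = 1.0               # set trigger pixels to white
    return poisoned

def poison_dataset(dataset, target_label, poison_idx):
    out = []
    for i, (img, label) in enumerate(dataset):
        if i in poison_idx:
            out.append((stamp_trigger(img), target_label))  # relabeled
        else:
            out.append((img, label))
    return out

blank = [[0.0] * 4 for _ in range(4)]
data = [(blank, "stop"), (blank, "stop"), (blank, "yield")]
poisoned = poison_dataset(data, "speed limit", {0})
print(poisoned[0][1], poisoned[0][0][3][3])  # speed limit 1.0
```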

"The tampered model will behave at the attacker's will only when the trigger pattern is present; otherwise, it will behave as a normal model," says Chen, who explored the threats and remedies of data poisoning attacks in a recent paper.

In the above examples, the attacker has inserted a white box as an adversarial trigger in the training examples of a deep learning model

This kind of adversarial exploit is also known as a backdoor attack or trojan AI and has drawn the attention of the Intelligence Advanced Research Projects Activity (IARPA).

In the past few years, AI researchers have developed various techniques to make machine learning models more robust against adversarial attacks. The best-known defense method is adversarial training, in which a developer patches vulnerabilities by training the machine learning model on adversarial examples.
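The idea behind adversarial training can be sketched with a deliberately tiny 1-D "model" (all data and the learner here are invented): augment each clean training point with a worst-case perturbed copy that keeps its true label, then fit on the enlarged set. The robustly trained boundary moves so that eps-sized nudges no longer flip predictions.

```python
# Toy adversarial-training sketch on a 1-D threshold classifier.
# All points, labels, and eps are invented for illustration.

def fit_threshold(points):
    # toy learner: put the boundary just above the largest class-0 point
    return max(x for x, y in points if y == 0)

def classify(threshold, x):
    return 1 if x > threshold else 0

def adversarial_augment(points, eps):
    # add each point's worst-case eps-perturbation with its true label
    return points + [(x + eps if y == 0 else x - eps, y) for x, y in points]

clean = [(0.0, 0), (0.3, 0), (0.9, 1), (1.0, 1)]
eps = 0.2
plain = fit_threshold(clean)                             # boundary at 0.3
robust = fit_threshold(adversarial_augment(clean, eps))  # boundary at 0.5

x_adv = 0.3 + eps   # a class-0 point nudged toward the boundary
print(classify(plain, x_adv), classify(robust, x_adv))  # 1 0
```

The plain model is fooled by the eps-sized nudge; the adversarially trained one is not, which is exactly the patching effect described above.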

Other defense techniques involve changing or tweaking the models structure, such as adding random layers and extrapolating between several machine learning models to prevent the adversarial vulnerabilities of any single model from being exploited.

"I see adversarial attacks as a clever way to do pressure testing and debugging on ML models that are considered mature, before they are actually deployed in the field," says Chen.

"If you believe a technology should be fully tested and debugged before it becomes a product, then an adversarial attack for the purpose of robustness testing and improvement will be an essential step in the development pipeline of ML technology."


Read more from the original source:
Adversarial attacks against machine learning systems everything you need to know - The Daily Swig