
Category Archives: Artificial Intelligence

Algorithm-Driven Design: How Artificial Intelligence Is …

Posted: January 4, 2017 at 6:06 pm


I've been following the idea of algorithm-driven design for several years now and have collected some practical examples. The tools of this approach can help us to construct a UI, prepare assets and content, and personalize the user experience. The information, though, has always been scarce and hasn't been systematic.

However, in 2016, the technological foundations of these tools became easily accessible, and the design community got interested in algorithms, neural networks and artificial intelligence (AI). Now is the time to rethink the modern role of the designer.

One of the most impressive promises of algorithm-driven design was given by the infamous CMS The Grid. It chooses templates and content-presentation styles, and it retouches and crops photos, all by itself. Moreover, the system runs A/B tests to choose the most suitable pattern. However, the product is still in private beta, so we can judge it only by its publications and ads.

The Designer News community found real-world examples of websites created with The Grid, and they had a mixed reaction: people criticized the design and code quality. Many skeptics opened a champagne bottle on that day.

The idea to fully replace a designer with an algorithm sounds futuristic, but the whole point is wrong. Product designers help to translate a raw product idea into a well-thought-out user interface, with solid interaction principles and a sound information architecture and visual style, while helping a company to achieve its business goals and strengthen its brand.

Designers make a lot of big and small decisions; many of them are hardly described by clear processes. Moreover, incoming requirements are not 100% clear and consistent, so designers help product managers solve these collisions, making for a better product. It's about much more than choosing a suitable template and filling it with content.

However, if we talk about creative collaboration, when designers work in tandem with algorithms to solve product tasks, we see a lot of good examples and clear potential. It's especially interesting how algorithms can improve our day-to-day work on websites and mobile apps.

Designers have learned to juggle many tools and skills to near perfection, and as a result, a new term emerged: product designer. Product designers are proactive members of a product team; they understand how user research works, they can do interaction design and information architecture, they can create a visual style, enliven it with motion design, and make simple changes in the code for it. These people are invaluable to any product team.

However, balancing so many skills is hard: you can't dedicate enough time to every aspect of product work. Of course, a recent boom in new design tools has shortened the time we need to create deliverables and has expanded our capabilities. However, it's still not enough. There is still too much routine, and new responsibilities eat up all of the time we've saved. We need to automate and simplify our work processes even more. I see three key directions for this:

I'll show you some examples and propose a new approach for this future work process.

Publishing tools such as Medium, Readymag and Squarespace have already simplified the author's work: countless high-quality templates will give the author a pretty design without having to pay for a designer. There is an opportunity to make these templates smarter, so that the barrier to entry gets even lower.

For example, while The Grid is still in beta, a hugely successful website constructor, Wix, has started including algorithm-driven features. The company announced Advanced Design Intelligence, which looks similar to The Grid's semi-automated way of enabling non-professionals to create a website. Wix teaches the algorithm by feeding it many examples of high-quality modern websites. Moreover, it tries to make style suggestions relevant to the client's industry. It's not easy for non-professionals to choose a suitable template, and products like Wix and The Grid could serve as design experts.

Surely, as in the case of The Grid, removing designers from the creative process leads to clichéd and mediocre results (even if it improves overall quality). However, if we consider this process more like paired design with a computer, then we can offload many routine tasks; for example, designers could create a moodboard on Dribbble or Pinterest, and then an algorithm could quickly apply these styles to mockups and propose a suitable template. Designers would become art directors to their new apprentices, computers.

Of course, we can't create a revolutionary product in this way, but we could free some time to create one. Moreover, many everyday tasks are utilitarian and don't require a revolution. If a company is mature enough and has a design system, then algorithms could make it more powerful.

For example, the designer and developer could define the logic that considers content, context and user data; then, a platform would compile a design using principles and patterns. This would allow us to fine-tune the tiniest details for specific usage scenarios, without drawing and coding dozens of screen states by hand. Florian Schulz shows how you can use the idea of interpolation to create many states of components.
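
To make the interpolation idea concrete, here is a minimal sketch (my own illustration, not Schulz's implementation): the designer defines two extreme states of a component by hand, and the code generates every state in between. The card properties and values are hypothetical.

```python
# Minimal sketch: linear interpolation between two named states of a UI component.
# The "card" properties and state names are hypothetical, not taken from Schulz's work.

def lerp(a: float, b: float, t: float) -> float:
    """Linear interpolation between a and b for t in [0, 1]."""
    return a + (b - a) * t

def interpolate_state(compact: dict, expanded: dict, t: float) -> dict:
    """Blend every numeric property of a component between two extreme states."""
    return {key: lerp(compact[key], expanded[key], t) for key in compact}

# Two hand-designed extremes; the algorithm fills in everything in between.
card_compact = {"width": 320, "padding": 8, "font_size": 14, "image_height": 0}
card_expanded = {"width": 640, "padding": 24, "font_size": 20, "image_height": 240}

for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(t, interpolate_state(card_compact, card_expanded, t))
```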

My interest in algorithm-driven design sprang up around 2012, when my design team at Mail.Ru Group required an automated magazine layout. Existing content had a poor semantic structure, and updating it by hand was too expensive. How could we get modern designs, especially when the editors weren't designers?

Well, a special script would parse an article. Then, depending on the article's content (the number of paragraphs and words in each, the number of photos and their formats, the presence of inserts with quotes and tables, etc.), the script would choose the most suitable pattern to present this part of the article. The script also tried to mix patterns, so that the final design had variety. It would save the editors time in reworking old content, and the designer would just have to add new presentation modules. Flipboard launched a very similar model a few years ago.
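
The core of such a script is essentially a scoring loop over a pattern library. The sketch below is a simplified reconstruction of the idea, not the actual Mail.Ru script; the pattern names and content features are invented for illustration.

```python
# Hedged sketch of the pattern-matching idea described above: each presentation
# pattern declares the content it fits, and the script scores patterns against a
# parsed article section. Pattern names and features are illustrative only.

SECTION = {"paragraphs": 3, "words": 220, "photos": 2, "has_quote": False}

PATTERNS = [
    {"name": "text-column", "min_words": 100, "max_photos": 0, "needs_quote": False},
    {"name": "photo-spread", "min_words": 0, "max_photos": 4, "needs_quote": False},
    {"name": "pull-quote", "min_words": 50, "max_photos": 1, "needs_quote": True},
]

def score(pattern: dict, section: dict) -> int:
    """Very rough fit score; higher is better, -1 means the pattern cannot be used."""
    if section["photos"] > pattern["max_photos"]:
        return -1
    if pattern["needs_quote"] and not section["has_quote"]:
        return -1
    s = 0
    if section["words"] >= pattern["min_words"]:
        s += 1
    if section["photos"] > 0 and pattern["max_photos"] > 0:
        s += 1
    return s

best = max(PATTERNS, key=lambda p: score(p, SECTION))
print("chosen pattern:", best["name"])
```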

Vox Media made a home page generator using similar ideas. The algorithm finds every possible layout that is valid, combining different examples from a pattern library. Next, each layout is examined and scored based on certain traits. Finally, the generator selects the best layout, basically the one with the highest score. It's more efficient than picking the best links by hand, as proven by recommendation engines such as Relap.io.
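
The generate-and-score loop behind this kind of generator can be sketched in a few lines. The modules, grid width and scoring traits below are made up for illustration; Vox Media's real tool evaluates far richer traits.

```python
# Sketch of the generate-and-score approach: enumerate candidate layouts from a small
# pattern library, score each one, and keep the highest-scoring layout.
from itertools import permutations

MODULES = {"hero": 6, "river": 4, "photo-grid": 4, "sidebar": 2}  # name -> grid units
GRID_WIDTH = 12

def layouts():
    """Every ordering of a subset of modules whose widths exactly fill the grid."""
    for r in range(1, len(MODULES) + 1):
        for combo in permutations(MODULES, r):
            if sum(MODULES[m] for m in combo) == GRID_WIDTH:
                yield combo

def score(layout):
    """Toy scoring: reward a layout that leads with the hero and uses more modules."""
    s = 10 if layout[0] == "hero" else 0
    s += len(layout)
    return s

best = max(layouts(), key=score)
print("best layout:", " | ".join(best), "score:", score(best))
```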

Creating cookie-cutter graphic assets in many variations is one of the most boring parts of a designers work. It takes so much time and is demotivating, when designers could be spending this time on more valuable product work.

Algorithms could take on simple tasks such as color matching. For example, Yandex.Launcher uses an algorithm to automatically set up colors for app cards, based on app icons. Other variables could be set automatically, such as changing text color according to the background color, highlighting eyes in a photo to emphasize emotion, and implementing parametric typography.
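
One of these simple tasks, picking a legible text color for a given background, fits in a dozen lines. The sketch below uses the standard WCAG relative-luminance formula; the 0.179 threshold and the example colors are reasonable defaults of my own choosing, not values from any of the products mentioned.

```python
# Choose black or white text automatically based on the background's relative luminance.

def channel(c: int) -> float:
    """Linearize one sRGB channel (0-255) per the WCAG definition."""
    c = c / 255.0
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb: tuple) -> float:
    r, g, b = (channel(v) for v in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def text_color_for(background: tuple) -> str:
    """Return '#000000' or '#ffffff', whichever contrasts more with the background."""
    return "#000000" if relative_luminance(background) > 0.179 else "#ffffff"

print(text_color_for((250, 250, 250)))  # light background -> black text
print(text_color_for((20, 60, 120)))    # dark blue background -> white text
```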

Algorithms can create an entire composition. Yandex.Market uses a promotional image generator for e-commerce product lists (in Russian). A marketer fills in a simple form with a title and an image, and then the generator proposes an endless number of variations, all of which conform to design guidelines. Netflix went even further: its script crops movie characters for posters, then applies a stylized and localized movie title, then runs automatic experiments on a subset of users. Real magic! Engadget has nurtured a robot apprentice to write simple news articles about new gadgets. Whew!

Truly dark magic happens in neural networks. A fresh example, the Prisma app, stylizes photos to look like works of famous artists. Artisto can process video in a similar way (even streaming video).

However, all of this is still at an early stage. Sure, you could download an app on your phone and get a result in a couple of seconds, rather than struggle with some library on GitHub (as we had to last year); but it's still impossible to upload your own reference style and get a good result without teaching a neural network. However, when that happens at last, will it make illustrators obsolete? I doubt it will for those artists with a solid and unique style. But it will lower the barrier to entry when you need decent illustrations for an article or website but don't need a unique approach. No more boring stock photos!

For a really unique style, it might help to have a quick stylized sketch based on a question like, "What if we did an illustration of a building in our unified style?" For example, the Pixar artists of the animated movie Ratatouille tried to apply several different styles to the movie's scenes and characters; what if a neural network made these sketches? We could also create storyboards and describe scenarios with comics (photos can be easily converted to sketches). The list can get very long.

Finally, there is live identity, too. Animation has become hugely popular in branding recently, but some companies are going even further. For example, Wolff Olins presented a live identity for the Brazilian telecom Oi, which reacts to sound. You just can't create crazy stuff like this without some creative collaboration with algorithms.

One way to get a clear and well-developed strategy is to personalize a product for a narrow audience segment or even specific users. We see it every day in Facebook newsfeeds, Google search results, Netflix and Spotify recommendations, and many other products. Besides the fact that it relieves users of the burden of filtering information, the user's connection to the brand becomes more emotional when the product seems to care so much about them.

However, the key question here is about the role of the designer in these solutions. We rarely have the skill to create algorithms like these; engineers and big data analysts are the ones to do it. Giles Colborne of CX Partners sees a great example in Spotify's Discover Weekly feature: the only element of classic UX design here is the track list, whereas the distinctive work is done by a recommendation system that fills this design template with valuable music.

Colborne offers advice to designers about how to continue being useful in this new era and how to use various data sources to build and teach algorithms. It's important to learn how to work with big data and to cluster it into actionable insights. For example, Airbnb learned how to answer the question, "What will the booked price of a listing be on any given day in the future?", so that its hosts could set competitive prices. There are also endless stories about Netflix's recommendation engine.

A relatively new term, anticipatory design, takes a broader view of UX personalization and the anticipation of user wishes. We already have these types of things on our phones: Google Now automatically proposes a way home from work using location history data; Siri proposes similar ideas. However, the key factor here is trust. To execute anticipatory experiences, people have to give large companies permission to gather personal usage data in the background.

I already mentioned some examples of automatic testing of design variations used by Netflix, Vox Media and The Grid. This is one more way to personalize UX that could be put onto the shoulders of algorithms. Liam Spradlin describes the interesting concept of mutative design; it's a well-thought-out model of adaptive interfaces that considers many variables to fit particular users.

I've covered several examples of algorithm-driven design in practice. What tools do modern designers need for this? If we look back to the middle of the last century, computers were envisioned as a way to extend human capabilities. Roelof Pieters and Samim Winiger have analyzed computing history and the idea of augmentation of human ability in detail. They see three levels of maturity for design tools:

Algorithm-driven design should be something like an exoskeleton for product designers, increasing the number and depth of decisions we can get through. How might designers and computers collaborate?

The working process of digital product designers could potentially look like this:

These tasks are of two types: the analysis of implicitly expressed information and already working solutions, and the synthesis of requirements and solutions for them. Which tools and working methods do we need for each of them?

Analysis of implicitly expressed information about users that can be studied with qualitative research is hard to automate. However, exploring the usage patterns of users of existing products is a suitable task. We could extract behavioral patterns and audience segments, and then optimize the UX for them. It's already happening in ad targeting, where algorithms can cluster a user using implicit and explicit behavior patterns (within either a particular product or an ad network).

To train algorithms to optimize interfaces and content for these user clusters, designers should look into machine learning. Jon Bruner gives a good example: "A genetic algorithm starts with a fundamental description of the desired outcome, say, an airline's timetable that is optimized for fuel savings and passenger convenience. It adds in the various constraints: the number of planes the airline owns, the airports it operates in, and the number of seats on each plane. It loads what you might think of as independent variables: details on thousands of flights from an existing timetable, or perhaps randomly generated dummy information. Over thousands, millions or billions of iterations, the timetable gradually improves to become more efficient and more convenient. The algorithm also gains an understanding of how each element of the timetable (the take-off time of Flight 37 from O'Hare, for instance) affects the dependent variables of fuel efficiency and passenger convenience."
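
Stripped of the airline specifics, the loop Bruner describes is short: score a population of candidate solutions, keep the fitter half, and breed mutated offspring from it. The sketch below optimizes a toy all-ones bit string rather than a timetable, purely to stay self-contained; it illustrates the general technique, not Bruner's example.

```python
# Bare-bones genetic algorithm: score, select, recombine, mutate, repeat.
import random

TARGET = [1] * 20                     # toy goal: an all-ones bit string
POP_SIZE, GENERATIONS, MUTATION = 50, 200, 0.02

def fitness(candidate):
    return sum(1 for a, b in zip(candidate, TARGET) if a == b)

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(candidate):
    return [1 - g if random.random() < MUTATION else g for g in candidate]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    parents = population[: POP_SIZE // 2]          # keep the fitter half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

print("best after", gen, "generations:", fitness(population[0]), "/", len(TARGET))
```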

In this scenario, humans curate an algorithm and can add or remove limitations and variables. The results can be tested and refined with experiments on real users. With a constant feedback loop, the algorithm improves the UX, too. Although the complexity of this work suggests that analysts will be doing it, designers should be aware of the basic principles of machine learning. O'Reilly recently published a great mini-book on the topic.

Two years ago, a tool for industrial designers named Autodesk Dreamcatcher made a lot of noise and prompted several publications from UX gurus. It's based on the idea of generative design, which has been used in performance, industrial design, fashion and architecture for many years now. Many of you know Zaha Hadid Architects; its office calls this approach parametric design.

Logojoy is a product to replace freelancers for simple logo design. You choose favorite styles, pick a color and, voilà, Logojoy generates endless ideas. You can refine a particular logo, see an example of a corporate style based on it, and order a branding package with business cards, envelopes, etc. It's the perfect example of an algorithm-driven design tool in the real world! Dawson Whitfield, the founder, described the machine learning principles behind it.

However, this approach is not yet established in digital product design, because it doesn't help to solve utilitarian tasks. Of course, the work of architects and industrial designers has enough limitations and specificities of its own, but user interfaces aren't static: their usage patterns, content and features change over time, often many times. However, if we consider the overall generative process (a designer defines rules, which are used by an algorithm to create the final object), there's a lot of inspiration.

It's not yet clear how we can filter a huge number of concepts in digital product design, in which usage scenarios are so varied. If algorithms could also help to filter generated objects, our job would be even more productive and creative. However, as product designers, we use generative design every day in brainstorming sessions where we propose dozens of ideas, or when we iterate on screen mockups and prototypes. Why can't we offload a part of these activities to algorithms?

The experimental tool Rene by Jon Gold, who worked at The Grid, is an example of this approach in action. Gold taught a computer to make meaningful typographic decisions. Gold thinks that it's not far from how human designers are taught, so he broke this learning process into several steps:

His idea is similar to what Roelof and Samim say: Tools should be creative partners for designers, not just dumb executants.

Gold's experimental tool Rene is built on these principles. He also talks about imperative and declarative approaches to programming and says that modern design tools should choose the latter, focusing on what we want to calculate, not how. Jon uses vivid formulas to show how this applies to design and has already made a couple of low-level demos. You can try out the tool for yourself. It's a very early concept, but enough to give you the idea.

While Jon jokingly calls this approach "brute-force design" and "multiplicative design", he emphasizes the importance of a professional being in control. Notably, he left The Grid team earlier this year.

Unfortunately, there are no tools for product design for web and mobile that could help with analysis and synthesis on the same level as Autodesk Dreamcatcher does. However, The Grid and Wix could be considered more or less mass-level and straightforward solutions. Adobe is constantly adding features that could be considered intelligent: the latest release of Photoshop has a content-aware feature that intelligently fills in the gaps when you use the cropping tool to rotate an image or expand the canvas beyond the image's original size.

There is another experiment by Adobe and the University of Toronto. DesignScape automatically refines a design layout for you. It can also propose an entirely new composition.

You should definitely follow Adobe in its developments, because the company announced a smart platform named Sensei at the MAX 2016 conference. Sensei uses Adobe's deep expertise in AI and machine learning, and it will be the foundation for future algorithm-driven design features in Adobe's consumer and enterprise products. In its announcement, the company refers to things such as semantic image segmentation (showing each region in an image, labeled by type, for example, building or sky), font recognition (i.e. recognizing a font from a creative asset and recommending similar fonts, even from handwriting), and intelligent audience segmentation.

However, as John McCarthy, the late computer scientist who coined the term "artificial intelligence", famously said, "As soon as it works, no one calls it AI anymore." What was once cutting-edge AI is now considered standard behavior for computers. Here are a couple of experimental ideas and tools that could become a part of the digital product designer's day-to-day toolkit:

But these are rare and patchy glimpses of the future. Right now, it's more about individual companies building custom solutions for their own tasks. One of the best approaches is to integrate these algorithms into a company's design system. The goals are similar: to automate a significant number of tasks in support of the product line; to achieve and sustain a unified design; to simplify launches; and to support current products more easily.

Modern design systems started as front-end style guidelines, but that's just a first step (integrating design into the code used by developers). The developers are still creating pages by hand. The next step is half-automatic page creation and testing using predefined rules.

Platform Thinking by Yury Vetrov (Source)

Should your company follow this approach?

If we look in the near term, the value of this approach is more or less clear:

Altogether, this frees the designer from the routines of both development support and the creative process, but core decisions are still made by them. A neat side effect is that we will better understand our work, because we will be analyzing it in an attempt to automate parts of it. It will make us more productive and will enable us to better explain the essence of our work to non-designers. As a result, the overall design culture within a company will grow.

However, all of these benefits are not so easy to implement or have limitations:

There are also ethical questions: Is design produced by an algorithm valuable and distinct? Who is the author of the design? Wouldn't generative results be limited by a local maximum? Oliver Roeder says that computer art isn't any more provocative than "paint art" or "piano art". The algorithmic software is written by humans, after all, using theories thought up by humans, using a computer built by humans, using specifications written by humans, using materials gathered by humans, in a company staffed by humans, using tools built by humans, and so on. Computer art is human art: a subset, rather than a distinction. The revolution is already happening, so why don't we lead it?

This is a story of a beautiful future, but we should remember the limits of algorithms: they're built on rules defined by humans, even if the rules are being supercharged now with machine learning. The power of the designer is that they can make and break rules; so, a year from now, we might define "beautiful" as something totally different. Our industry has both high- and low-skilled designers, and it will be easy for algorithms to replace the latter. However, those who can follow and break rules when necessary will find magical new tools and possibilities.

Moreover, digital products are getting more and more complex: we need to support more platforms, tweak usage scenarios for more user segments, and hypothesize more. As Frog's Harry West says, human-centered design has expanded from the design of objects (industrial design) to the design of experiences (encompassing interaction design, visual design and the design of spaces). The next step will be the design of system behavior: the design of the algorithms that determine the behavior of automated or intelligent systems. Rather than hire more and more designers, offload routine tasks to a computer. Let it play with the fonts.



Yury leads a team comprising UX and visual designers at one of the largest Russian Internet companies, Mail.Ru Group. His team works on communications, content-centric, and mobile products, as well as cross-portal user experiences. Both Yury and his team are doing a lot to grow their professional community in Russia.

See more here:

Algorithm-Driven Design: How Artificial Intelligence Is ...


Real FX – Slotless Racing with Artificial Intelligence

Posted: November 23, 2016 at 10:00 pm

There's more to it than meets the eye with Sensor-Track. Our scientists developed the track using a base material of PVC, which was selected for its high tensile strength, flexibility and chemical resistance. This base material was then further reinforced using an encapsulated micro-weave fabric, to maximise tear resistance while maintaining the desired balance between weight, thickness and strength. We then experimented with custom texture finishes, to determine the optimum amount of friction to allow the cars' tyres to grip the track, but allow players to drift, or oversteer with opposite lock, through corners. The brief was to replicate as closely as possible the feel and interaction of a real car on a real race track.

The result is a track system that, no matter how big you wish to go, is still portable enough to carry to your friend's or to the park, in a low-weight and compact storage box. With no lumpy bits of plastic or metal, Sensor-Track can also be truly flat-packed for the most efficient stowage. You won't fill a whole wardrobe no matter how large your collection of track pieces becomes, and your track and cars are 100% compatible with everyone else's, so you can go large. Very, very large.

Throughout the development we worked closely with model racing enthusiasts to build a modular system of track parts that delivers the maximum flexibility to design and build different race circuits. You can now take Le Mans to your mates, with extra Sensor-Track pieces such as short and sharp (R1) bends for tight circuits and wide sweeping (R2) bends, bottlenecks, and crossovers. Sensor-Track allows you to build realistic representations of famous race tracks like Silverstone and the Nürburgring! See the Track Builder, where building the tracks of the world becomes a reality.

Read this article:

Real FX - Slotless Racing with Artificial Intelligence


Elon Musk’s artificial intelligence group signs Microsoft …

Posted: at 10:00 pm


OpenAI has been backed by tech luminaries to the tune of $1bn.

Microsoft and Elon Musk's artificial intelligence research group have signed a partnership.

Terms of the partnership will see OpenAI use Microsoft's cloud, Azure, for its large-scale experiments.

The non-profit AI research organisation, which is backed by Elon Musk, Peter Thiel, and the likes of Amazon Web Services and Infosys to the tune of $1bn, has the goal of advancing digital intelligence in the way that "is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return."

The partnership comes about partly due to Microsoft's work on its Cognitive Toolkit, a system for deep learning that is used to speed advances in areas such as speech and image recognition, and because Azure already supports AI workloads with tools such as Azure Batch and Azure Machine Learning.

OpenAI has been an early adopter of Microsoft's still-in-beta N-Series machines, a cloud computing service powered by Nvidia, and now the two will partner on finding ways to advance AI research.

OpenAI said in a blog post: "In the coming months we will use thousands to tens of thousands of these machines to increase both the number of experiments we run and the size of the models we train."

Microsoft has been pushing ahead of the cloud market when it comes to investing in artificial intelligence and introducing it into products. Cortana is probably the best known of its AI offerings, but it has also been applying the technology to medicine and to bots, which it has begun rolling out to online help services.

Microsoft also today launched its Azure Bot Service. The service is designed to help developers cost-effectively host their bots on Azure.

OpenAI was created in December 2015 with a group of founding members that includes the likes of Elon Musk. Ilya Sutskever is the research director and Greg Brockman is the CTO.

Read more:

Elon Musk's artificial intelligence group signs Microsoft ...


Artificial Intelligence Course – Computer Science at CCSU

Posted: at 10:00 pm

Spring 2005. Classes: TR 5:15 pm - 6:30 pm, RVAC 107.

Instructor: Dr. Zdravko Markov, MS 203, (860)-832-2711, http://www.cs.ccsu.edu/~markov/, e-mail: markovz@ccsu.edu

Office hours: MW: 6:45 pm - 7:45 pm, TR: 10:00 am - 12:00 pm, or by appointment.

Catalog Description: Artificial Intelligence ~ Spring. ~ [c] ~ Prereq.: CS 253 or (for graduates) CS 501. ~ Presentation of artificial intelligence as a coherent body of ideas and methods to acquaint the student with the classic programs in the field and their underlying theory. Students will explore this through problem-solving paradigms, logic and theorem proving, language and image understanding, search and control methods, and learning.

Course Goals

The letter grades will be calculated according to the following table:

Late assignments will be marked one letter grade down for each two classes they are late. It is expected that all students will conduct themselves in an honest manner and NEVER claim work which is not their own. Violating this policy will result in a substantial grade penalty or a final grade of F.

To do the semester projects, students have to form teams of 3 people (2-person teams should consult the instructor first). Each team chooses one project to work on. The projects to choose from are the following:

To complete the project students are required to:

Documentation and submission: Write a report describing the solutions to all problems and answers to all questions and mail it as an attachment to my instructor's account for the WebCT (available through Campus Pipeline/My Courses/Artificial Intelligence).


Use the weather (tennis) data in tennis.pl and do the following:

Documentation and submission: Write a report describing the solutions to all problems and mail it as an attachment to my instructor's account for the WebCT (available through Campus Pipeline/My Courses/Artificial Intelligence).

The test includes the following topics:


Continued here:

Artificial Intelligence Course - Computer Science at CCSU


FREE Artificial Intelligence Essay – Example Essays

Posted: at 10:00 pm

Artificial Intelligence (AI) is the area of computer science focusing on creating machines that can engage in behaviors that humans consider intelligent. The ability to create intelligent machines has intrigued humans since ancient times, and today, with the advent of the computer and 50 years of research into AI programming techniques, the dream of smart machines is becoming a reality. Researchers are creating systems which can mimic human thought, understand speech, beat the best human chess player, and perform countless other feats never before possible. I focused on this area for my capstone because I thought it would be an original idea and would also be interesting to investigate, and to determine whether artificial intelligence is a good or a bad concept for human life. I wish to explain how it came about, the reasoning behind artificial intelligence, and where I think it will go in the future, based on my research on this topic.

The Story Behind It All. Artificial intelligence has been around for longer than most people think. We all assume that artificial intelligence has been in research for only about 20 years or so. In actuality, after thousands of years of fantasy, the appearance of the digital computer, with its native, human-like ability to process symbols, made it seem that the myth of man-made intelligence would finally become reality. The history of artificial intelligence started in the 3rd century BC, when the Chinese engineer Mo Ti created mechanical birds, dragons, and warriors. Technology was being used to transform myth into reality. Much later, mechanical ducks and humanoid figures, crafted by clockmakers, endlessly amused the royal courts of Enlightenment-age Europe. It has long been possible to make machines that looked and moved in human-like ways, machines that could spook and awe the audience, but creating a model of the mind was, in that day and age, off limits.

See the original post here:

FREE Artificial Intelligence Essay - Example Essays


Artificial Intelligence Lockheed Martin

Posted: at 10:00 pm

For the commander facing an unconventional adversary; for the intelligence analyst trying to find the needle in the data haystack; or for the operator trying to maintain complex systems under degraded conditions or attack, today's warfighter faces problems of scale, complexity, pace and resilience that outpace unaided human decision making. Artificial Intelligence (AI) provides the technology to augment human analysts and decision makers by capturing knowledge in computers in forms that can be re-applied in critical situations. This gives users the ability to react to problems that require analysis of massive data, that demand fast-paced analysis and decision making, and that demand resilience in uncertain and changing conditions. AI offers the technology to change the human role from in-the-loop controller to on-the-loop thinker who can focus on a more reflective assessment of problems and strategies, guiding rather than being buried in execution detail. By creating technology that allows captured knowledge to continually evolve to incorporate new experience or changing user needs, AI-based analysis and decision support tools can continue to assist the user long after their original knowledge becomes obsolete.

Key Technologies

Artificial Intelligence is focused on the research, development, and transition of technologies that enable dynamic and real-time changes to knowledge bases, allowing for informed, agile, and coordinated Command and Control decisions.

The Artificial Intelligence group has an emphasis in four key thrust areas:

Artificial Intelligence is one of several Research Areas for the Informatics Laboratory.

Continue reading here:

Artificial Intelligence Lockheed Martin


What does artificial intelligence mean? – Definitions.net

Posted: at 10:00 pm

Artificial intelligence

Artificial intelligence is technology and a branch of computer science that studies and develops intelligent machines and software. Major AI researchers and textbooks define the field as "the study and design of intelligent agents", where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success. John McCarthy, who coined the term in 1955, defines it as "the science and engineering of making intelligent machines". AI research is highly technical and specialised, deeply divided into subfields that often fail to communicate with each other. Some of the division is due to social and cultural factors: subfields have grown up around particular institutions and the work of individual researchers. AI research is also divided by several technical issues. There are subfields which are focused on the solution of specific problems, on one of several possible approaches, on the use of widely differing tools and towards the accomplishment of particular applications. The central problems of AI research include reasoning, knowledge, planning, learning, communication, perception and the ability to move and manipulate objects. General intelligence is still among the field's long term goals. Currently popular approaches include statistical methods, computational intelligence and traditional symbolic AI. There are an enormous number of tools used in AI, including versions of search and mathematical optimization, logic, methods based on probability and economics, and many others.

Here is the original post:

What does artificial intelligence mean? - Definitions.net


Artificial Intelligence in Medicine: An Introduction

Posted: at 10:00 pm

Acknowledgement

The material on this page is taken from Chapter 19 of Guide to Medical Informatics, the Internet and Telemedicine (First Edition) by Enrico Coiera (reproduced here with the permission of the author).

Introduction

From the very earliest moments in the modern history of the computer, scientists have dreamed of creating an 'electronic brain'. Of all the modern technological quests, this search to create artificially intelligent (AI) computer systems has been one of the most ambitious and, not surprisingly, controversial.

It also seems that very early on, scientists and doctors alike were captivated by the potential such a technology might have in medicine (e.g. Ledley and Lusted, 1959). With intelligent computers able to store and process vast stores of knowledge, the hope was that they would become perfect 'doctors in a box', assisting or surpassing clinicians with tasks like diagnosis.

With such motivations, a small but talented community of computer scientists and healthcare professionals set about shaping a research program for a new discipline called Artificial Intelligence in Medicine (AIM). These researchers had a bold vision of the way AIM would revolutionise medicine, and push forward the frontiers of technology.

AI in medicine at that time was a largely US-based research community. Work originated out of a number of campuses, including MIT-Tufts, Pittsburgh, Stanford and Rutgers (e.g. Szolovits, 1982; Clancey and Shortliffe, 1984; Miller, 1988). The field attracted many of the best computer scientists and, by any measure, their output in the first decade of the field remains a remarkable achievement.

In reviewing this new field in 1984, Clancey and Shortliffe provided the following definition:

Much has changed since then, and today this definition would be considered narrow in scope and vision. Today, the importance of diagnosis as a task requiring computer support in routine clinical situations receives much less emphasis (J. Durinck, E. Coiera, R. Baud, et al., "The Role of Knowledge Based Systems in Clinical Practice," in: eds Barahona and Christenen, Knowledge and Decisions in Health Telematics - The Next Decade, IOS Press, Amsterdam, pp. 199-203, 1994). So, despite the focus of much early research on understanding and supporting the clinical encounter, expert systems today are more likely to be found used in clinical laboratories and educational settings, for clinical surveillance, or in data-rich areas like the intensive care setting. For its day, however, the vision captured in this definition of AIM was revolutionary.

After the first euphoria surrounding the promise of artificially intelligent diagnostic programmes, the last decade has seen increasing disillusion amongst many with the potential for such systems. Yet, while there certainly have been ongoing challenges in developing such systems, they actually have proven their reliability and accuracy on repeated occasions (Shortliffe, 1987).

Much of the difficulty has been the poor way in which they have fitted into clinical practice, either solving problems that were not perceived to be an issue, or imposing changes in the way clinicians worked. What is now being realised is that when they fill an appropriate role, intelligent programmes do indeed offer significant benefits. One of the most important tasks now facing developers of AI-based systems is to characterise accurately those aspects of medical practice that are best suited to the introduction of artificial intelligence systems.

In the remainder of this chapter, the initial focus will thus remain on the different roles AIM systems can play in clinical practice, looking particularly to see where clear successes can be identified, as well as looking to the future. The next chapter will take a more technological focus, and look at the way AIM systems are built. A variety of technologies including expert systems and neural networks will be discussed. The final chapter in this section on intelligent decision support will look at the way AIM can support the interpretation of patient signals that come off clinical monitoring devices.

In his opinion, there were no ultimately useful measures of intelligence. It was sufficient that an objective observer could not tell the difference in conversation between a human and a computer for us to conclude that the computer was intelligent. To cancel out any potential observer biases, Turing's test put the observer in a room, equipped with a computer keyboard and screen, and made the observer talk to the test subjects only using these. The observer would engage in a discussion with the test subjects using the printed word, much as one would today by exchanging e-mail with a remote colleague. If a set of observers could not distinguish the computer from another human in over 50% of cases, then Turing felt that one had to accept that the computer was intelligent.

Another consequence of the Turing test is that it says nothing about how one builds an intelligent artefact, thus neatly avoiding discussions about whether the artefact needed to in any way mimic the structure of the human brain or our cognitive processes. It really didn't matter how the system was built, in Turing's mind. Its intelligence should only be assessed based upon its overt behaviour.

There have been attempts to build systems that can pass Turing's test in recent years. Some have managed to convince at least some humans in a panel of judges that they too are human, but none have yet passed the mark set by Turing.

An alternative approach to strong AI is to look at human cognition and decide how it can be supported in complex or difficult situations. For example, a fighter pilot may need the help of intelligent systems to assist in flying an aircraft that is too complex for a human to operate on their own. These 'weak' AI systems are not intended to have an independent existence, but are a form of 'cognitive prosthesis' that supports a human in a variety of tasks.

AIM systems are by and large intended to support healthcare workers in the normal course of their duties, assisting with tasks that rely on the manipulation of data and knowledge. An AI system could be running within an electronic medical record system, for example, and alert a clinician when it detects a contraindication to a planned treatment. It could also alert the clinician when it detected patterns in clinical data that suggested significant changes in a patient's condition.

Along with tasks that require reasoning with medical knowledge, AI systems also have a very different role to play in the process of scientific research. In particular, AI systems have the capacity to learn, leading to the discovery of new phenomena and the creation of medical knowledge. For example, a computer system can be used to analyse large amounts of data, looking for complex patterns within it that suggest previously unexpected associations. Equally, with enough of a model of existing medical knowledge, an AI system can be used to show how a new set of experimental observations conflict with the existing theories. We shall now examine such capabilities in more detail.

Expert or knowledge-based systems are the commonest type of AIM system in routine clinical use. They contain medical knowledge, usually about a very specifically defined task, and are able to reason with data from individual patients to come up with reasoned conclusions. Although there are many variations, the knowledge within an expert system is typically represented in the form of a set of rules.
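
To make the idea of rule-based reasoning concrete, here is a toy sketch of a forward-chaining rule engine of the kind described above. The rules and patient facts are invented for illustration and are not drawn from any real clinical knowledge base.

```python
# Illustrative sketch only (not a real clinical rule base): a tiny forward-chaining
# rule engine, where each rule maps known patient facts to a new conclusion until
# nothing more can be inferred.

RULES = [
    ({"temperature_over_38", "on_chemotherapy"}, "possible_neutropenic_fever"),
    ({"possible_neutropenic_fever"}, "alert_clinician"),
    ({"creatinine_high", "on_metformin"}, "review_metformin_dose"),
]

def infer(facts: set) -> set:
    """Repeatedly apply rules whose conditions are all satisfied by the known facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

patient = {"temperature_over_38", "on_chemotherapy"}
print(infer(patient))  # includes the derived alert
```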

There are many different types of clinical task to which expert systems can be applied.

Generating alerts and reminders. In so-called real-time situations, an expert system attached to a monitor can warn of changes in a patient's condition. In less acute circumstances, it might scan laboratory test results or drug orders and send reminders or warnings through an e-mail system.

Diagnostic assistance. When a patient's case is complex, rare or the person making the diagnosis is simply inexperienced, an expert system can help come up with likely diagnoses based on patient data.

Therapy critiquing and planning. Systems can either look for inconsistencies, errors and omissions in an existing treatment plan, or can be used to formulate a treatment based upon a patient's specific condition and accepted treatment guidelines.

Agents for information retrieval. Software 'agents' can be sent to search for and retrieve information, for example on the Internet, that is considered relevant to a particular problem. The agent contains knowledge about its user's preferences and needs, and may also need to have medical knowledge to be able to assess the importance and utility of what it finds.

Image recognition and interpretation. Many medical images can now be automatically interpreted, from plain X-rays through to more complex images like angiograms, CT and MRI scans. This is of value in mass-screenings, for example, when the system can flag potentially abnormal images for detailed human attention.

There are numerous reasons why more expert systems are not in routine use (Coiera, 1994). Some require the existence of an electronic medical record system to supply their data, and most institutions and practices do not yet have all their working data available electronically. Others suffer from poor human interface design and so do not get used even if they are of benefit.

Much of the reluctance to use systems simply arose because expert systems did not fit naturally into the process of care, and as a result using them required additional effort from already busy individuals. It is also true, but perhaps dangerous, to ascribe some of the reluctance to use early systems to the technophobia or computer illiteracy of healthcare workers. If a system is perceived by those using it to be beneficial, then it will be used. If not, independent of its true value, it will probably be rejected.

Happily, there are today very many systems that have made it into clinical use. Many of these are small, but nevertheless make positive contributions to care. In the next two sections, we will examine some of the more successful examples of knowledge-based clinical systems, in an effort to understand the reasons behind their success, and the role they can play.

In the first decade of AIM, most research systems were developed to assist clinicians in the process of diagnosis, typically with the intention that it would be used during a clinical encounter with a patient. Most of these early systems did not develop further than the research laboratory, partly because they did not gain sufficient support from clinicians to permit their routine introduction.

It is clear that some of the psychological basis for developing this type of support is now considered less compelling, given that situation assessment seems to be a bigger issue than diagnostic formulation. Some of these systems have continued to develop, however, and have transformed in part into educational systems.

DXplain is an example of one of these clinical decision support systems, developed at the Massachusetts General Hospital (Barnett et al., 1987). It is used to assist in the process of diagnosis, taking a set of clinical findings including signs, symptoms and laboratory data, and then producing a ranked list of diagnoses. It provides justification for each differential diagnosis, and suggests further investigations. The system contains a database of crude probabilities for over 4,500 clinical manifestations that are associated with over 2,000 different diseases.

DXplain is in routine use at a number of hospitals and medical schools, mostly for clinical education purposes, but is also available for clinical consultation. It also has a role as an electronic medical textbook. It is able to provide a description of over 2,000 different diseases, emphasising the signs and symptoms that occur in each disease and provides recent references appropriate for each specific disease.

Decision support systems need not be 'stand alone' but can be deeply integrated into an electronic medical record system. Indeed, such integration reduces the barriers to using such a system, by crafting them more closely into clinical working processes, rather than expecting workers to create new processes to use them.

The HELP system is an example of this type of knowledge-based hospital information system, which began operation in 1980 (Kuperman et al., 1990; Kuperman et al., 1991). It not only supports the routine applications of a hospital information system (HIS), including management of admissions and discharges and order entry, but also provides a decision support function. The decision support system has been actively incorporated into the functions of the routine HIS applications. Decision support provides clinicians with alerts and reminders, data interpretation and patient diagnosis facilities, patient management suggestions and clinical protocols. Activation of the decision support is provided within the applications but can also be triggered automatically as clinical data is entered into the patient's computerised medical record.

One of the most successful areas in which expert systems are applied is in the clinical laboratory. Practitioners may be unaware that while the printed report they receive from a laboratory was checked by a pathologist, the whole report may now have been generated by a computer system that has automatically interpreted the test results. Examples of such systems include the following.

Laboratory expert systems usually do not intrude into clinical practice. Rather, they are embedded within the process of care, and with the exception of laboratory staff, clinicians working with patients do not need to interact with them. For the ordering clinician, the system prints a report with a diagnostic hypothesis for consideration, but does not remove responsibility for information gathering, examination, assessment and treatment. For the pathologist, the system cuts down the workload of generating reports, without removing the need to check and correct reports.

All scientists are familiar with the statistical approach to data analysis. Given a particular hypothesis, statistical tests are applied to data to see if any relationships can be found between different parameters. Machine learning systems can go much further. They look at raw data and then attempt to hypothesise relationships within the data, and newer learning systems are able to produce quite complex characterisations of those relationships. In other words they attempt to discover humanly understandable concepts.

Learning techniques include neural networks, but encompass a large variety of other methods as well, each with their own particular characteristic benefits and difficulties. For example, some systems are able to learn decision trees from examples taken from data (Quinlan, 1986). These trees look much like the classification hierarchies discussed in Chapter 10, and can be used to help in diagnosis.
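
As a small, concrete illustration of learning a decision tree from examples, the following sketch uses scikit-learn (assumed to be available) on a made-up play-tennis style dataset. It only shows the mechanics; it is not clinical data or any system described in this chapter.

```python
# Learn a decision tree from labelled examples, in the spirit of Quinlan's work.
from sklearn.tree import DecisionTreeClassifier, export_text

# Features: outlook (0=sunny, 1=overcast, 2=rain), humidity (0=normal, 1=high), windy (0/1)
X = [
    [0, 1, 0], [0, 1, 1], [1, 1, 0], [2, 1, 0], [2, 0, 0],
    [2, 0, 1], [1, 0, 1], [0, 1, 0], [0, 0, 0], [2, 0, 0],
]
y = [0, 0, 1, 1, 1, 0, 1, 0, 1, 1]  # 1 = play, 0 = don't play

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(export_text(tree, feature_names=["outlook", "humidity", "windy"]))
print(tree.predict([[1, 1, 1]]))  # classify a new, unseen example
```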

Medicine has formed a rich test-bed for machine learning experiments in the past, allowing scientists to develop complex and powerful learning systems. While there has been much practical use of expert systems in routine clinical settings, at present machine learning systems still seem to be used in a more experimental way. There are, however, many situations in which they can make a significant contribution.


Read more here:

Artificial Intelligence in Medicine: An Introduction


The Non-Technical Guide to Machine Learning & Artificial …

Posted: at 10:00 pm

Shivon Zilis and James Cham, who invest in machine learning-related companies for Bloomberg Beta, recently created a machine intelligence market landscape.

Below, you can find links to the 317+ companies in the landscape (and a few more), and play around with some apps that are applying machine learning in interesting ways.

Algocian Captricity Clarifai Cortica Deepomatic DeepVision Netra Orbital Insight Planet Spaceknow

Capio Clover Intelligence Gridspace MindMeld Mobvoi Nexidia Pop Up Archive Quirious.ai TalkIQ Twilio

Alluvium C3 IoT Planet OS Maana KONUX Imubit GE Predix ThingWorx Uptake Sentenai Preferred Networks

Alation Arimo Cycorp

Deckard.ai Digital Reasoning IBM Watson Kyndi Databricks Sapho

Bottlenose CB Insights DataFox Enigma

Intelligent Layer Mattermark Predata Premise Quid Tracxn

ActionIQ Clarabridge Eloquent Labs Kasisto Preact Wise.io Zendesk

6sense AppZen Aviso Clari Collective[i] Fusemachines InsideSales Salesforce Einstein Zensight

AirPR BrightFunnel CogniCor Lattice LiftIgniter Mintigo msg.ai Persado Radius Retention Science

Cylance Darktrace Deep Instinct Demisto Drawbridge Networks Graphistry LeapYear SentinelOne SignalSense Zimperium

Entelo Algorithmia HiQ HireVue SpringRole Textio Unitive Wade & Wendy

AdasWorks Auro Robotics Drive.ai Google Mobileye nuTonomy Tesla Uber Zoox

Airware DJI DroneDeploy Lily Pilot AI Labs Shield AI Skycatch Skydio

Clearpath Robotics Fetch Robotics Harvest Automation JaybridgeRobotics Kindred AI Osaro Rethink Robotics

Amazon Alexa Apple Siri Facebook M Google Now/Allo Microsoft Cortana Replika

Alien Labs Butter.ai Clara Labs

Deckard.ai SkipFlag Slack Sudo Talla x.ai Zoom.ai

Abundant Robotics AgriData Blue River Technology Descartes Labs Mavrx Pivot Bio TerrAvion Trace Genomics Tule UDIO

AltSchool Content Technologies (CTI) Coursera Gradescope Knewton Volley

AlphaSense Bloomberg Cerebellum Capital Dataminr iSentium Kensho Quandl Sentient

Beagle Blue J Legal Legal Robot Ravel Law ROSS Intelligence Seal

Acerta ClearMetal Marble NAUTO PitStop Preteckt Routific

Calculario Citrine Eigen Innovations Ginkgo Bioworks Nanotronics Sight Machine Zymergen

Affirm Betterment Earnest Lendo Mirador Tala (formerly InVenture) Wealthfront ZestFinance

Atomwise CareSkore Deep6 Analytics IBM Watson Health Numerate Medical Oncora pulseData Sentrian Zephyr Health

DreamUp Vision

3Scan Arterys Bay Labs Butterfly Network Enlitic Google DeepMind Imagia

Atomwise Color Genomics Deep Genomics Grail iCarbonX Luminist Numerate Recursion Pharmaceuticals Verily Whole Biome

Automat Howdy Kasisto KITT.AI Maluuba Octane AI OpenAI Gym Semantic Machines

Ayasdi BigML Dataiku DataRobot Domino Data Lab Kaggle RapidMiner Seldon

Spark Beyond Yhat Yseop

Bonsai Scale Context Relevant Cycorp Datacratic deepsense.io Geometric Intelligence H2O.ai HyperScience Loop AI Labs minds.ai Nara Logics Reactive Scaled Inference Skymind SparkCognition

Agolo AYLIEN Cortical.io Lexalytics Loop AI Labs Luminoso MonkeyLearn Narrative Science spaCy

AnOdot Bonsai

Deckard.ai Fuzzy.ai Hyperopt Kite Layer 6 AI Lobe.ai RainforestQA SignifAI SigOpt

Amazon Mechanical Turk CrowdAI CrowdFlower Datalogue DataSift diffbot Enigma Import.io Paxata Trifacta WorkFusion

Amazon DSSTNE Apache Spark Azure ML Baidu Caffe Chainer DeepLearning4j H2O.ai Keras Microsoft CNTK Microsoft DMTK MLlib MXNet Nervana Neon PaddlePaddle scikit-learn TensorFlow Theano Torch7 Weka

1026 Labs Cadence Cirrascale Google TPU Intel (Nervana) Isocline KNUPATH NVIDIA DGX-1/Titan X Qualcomm Tenstorrent Tensilica

Cogitai Kimera Knoggin NNAISENSE Numenta OpenAI Vicarious

Andrew Ng Chief Scientist of Baidu; Chairman and Co-Founder of Coursera; Stanford CS faculty.

Sam Altman President, YC Group, OpenAI co-chairman.

Harry Shum EVP, Microsoft AI and Research.

Geoffrey Hinton The godfather of deep learning.

Samiur Rahman CEO of Canopy. Former Data Engineering Lead at Mattermark.

Jeff Dean Google Senior Fellow at Google, Inc. Co-founder and leader of Google's deep learning research and engineering team.

Eric Horvitz Technical Fellow at Microsoft Research

Denny Britz Deep Learning at Google Brain.

Tom Mitchell Computer scientist and E. Fredkin University Professor at Carnegie Mellon University.

Chris Dixon General Partner at Andreessen Horowitz.

Hilary Mason Founder at FastForwardLabs. Data Scientist in Residence at Accel.

Elon Musk Tesla Motors, SpaceX, SolarCity, PayPal & OpenAI.

Kirk Borne The Principal Data Scientist at Booz Allen, PhD Astrophysicist.

Peter Skomoroch Co-Founder & CEO SkipFlag. Previously Principal Data Scientist at LinkedIn, Engineer at AOL.

Paul Barba Chief Scientist at Lexalytics.

Andrej Karpathy Research scientist at OpenAI. Previously CS PhD student at Stanford.

Monica Rogati Former VP of Data Jawbone & LinkedIn data scientist.

Xavier Amatriain Leading Engineering at Quora. Netflix alumni.

Mike Gualtieri Forrester VP & Principal Analyst.

Fei-Fei Li Professor of Computer Science, Stanford University, Director of Stanford AI Lab.

David Silver Royal Society University Research Fellow.

Nando de Freitas Professor of Computer Science; Fellow, Linacre College.

Roberto Cipolla Department of Engineering, University of Cambridge.

Gabe Brostow Associate Professor in Computer Science at London's Global University.

Arthur Gretton Associate Professor with the Gatsby Computational Neuroscience Unit.

Ingmar Posner University Lecturer in Engineering Science at the University of Oxford.

Pieter Abbeel Associate Professor, UC Berkeley, EECS. Berkeley Artificial Intelligence Research (BAIR) laboratory. UC Berkeley Center for Human Compatible AI. Co-Founder Gradescope.

Josh Wills Slack Data Engineering and Apache Crunch committer.

Noah Weiss Head of Search, Learning, & Intelligence at Slack in NYC. Former SVP of Product at foursquare + Google PM on structured search.

Michael E. Driscoll Founder, CEO Metamarkets. Investor at Data Collective

Drew Conway Founder and CEO of Alluvium.

Sean Taylor Facebook Data Science Team

Demis Hassabis Co-Founder & CEO, DeepMind.

Randy Olson Senior Data Scientist at Penn Institute for Biomedical Informatics.

Shivon Zilis Partner at Bloomberg Beta where she focuses on machine intelligence companies.

Adam Gibson Founder of Skymind.

Alexandra Suich Technology reporter for The Economist.

Anthony Goldblum Co-founder and CEO of Kaggle.

Avi Goldfarb Professor at Rotman, University of Toronto and the Chief Data Scientist at Creative Destruction Lab.

Ben Lorica Chief Data Scientist of O'Reilly Media, and Program Director of the O'Reilly Strata & O'Reilly AI conferences. Ben hosts the O'Reilly Data Show Podcast too.

Chris Nicholson Co-founder Deeplearning4j & Skymind. Previous to that, Chris worked at The New York Times.

Doug Fulop Product manager at Kindred.ai.

Dror Berman Founder, Innovation Endeavors.

Dylan Tweney Founder of @TweneyMedia, former EIC @venturebeat, ex-@WIRED, publisher of @tinywords.

Gary Kazantsev R&D Machine Learning at Bloomberg LP.

Gideon Mann Head of Data Science / CTO Office at Bloomberg LP.

Gordon Ritter Cloud investor at Emergence Capital, cloud entrepreneur.

Jack Clark Strategy and Communications Director, OpenAI. Past: @business World's Only Neural Net Reporter. @theregister Distributed Systems Reporter.

Federico Pascual COO & Co-Founder, MonkeyLearn.

Matt Turck VC at FirstMark Capital and the organizer of Data Driven NYC and Hardwired NYC.

Nick Adams Data Scientist, Berkeley Institute for Data Science.

Roger Magoulas Research Director, O'Reilly Media.

Sean Gourley Former CEO, Quid.

Shruti Gandhi Array.VC, previously at True & Samsung Ventures.

Steve Jurvetson Partner at Draper Fisher Jurvetson.

Vijay Sundaram Venture Capitalist Innovation Endeavors, Tinkerer Polkadot Labs.

Zavain Dar VC Lux Capital, Lecturer Stanford University, Moneyball Philadelphia 76ers.

Yann Lecun Director of AI Research, Facebook. Founding Director of the NYU Center for Data Science

Read the rest here:

The Non-Technical Guide to Machine Learning & Artificial ...


Artificial Intelligence – Graduate Schools of Science …

Posted: at 10:00 pm

Artificial Intelligence (AI) is a field that develops intelligent algorithms and machines. Examples include: self-driving cars, smart cameras, surveillance systems, robotic manufacturing, machine translations, internet searches, and product recommendations. Modern AI often involves self-learning systems that are trained on massive amounts of data ("Big Data"), and/or interacting intelligent agents that perform distributed reasoning and computation. AI connects sensors with algorithms and human-computer interfaces, and extends itself into large networks of devices. AI has found numerous applications in industry, government and society, and is one of the driving forces of today's economy.

The Master's programme in Amsterdam has a technical approach towards AI research. It is a joint programme of the University of Amsterdam and Vrije Universiteit Amsterdam. This collaboration guarantees a wide range of topics, all taught by world-renowned researchers who are experts in their field.

In this Master's programme we offer a comprehensive collection of courses. It includes:

Next to the general AI programme we offer specialisations in:

Published by GSI

Excerpt from:

Artificial Intelligence - Graduate Schools of Science ...

