Q&A: SnapLogic CTO makes the case for investment in machine learning – IT Brief Australia

SnapLogic field chief technology officer Brad Drysdale discusses the roadblocks to a successful machine learning implementation and the ways organisations can overcome them.

The ML process can seem daunting for many organisations due to its unpredictable and experimental nature.

When IT teams first start going through the data to deploy ML algorithms, most probably won't know what type of data is required. A lot of exploration must be done before IT decision-makers have an idea of what data will be useful, and which ML algorithms will work best to solve a particular problem.

Other technical challenges include automating data access. Once organisations have formulated clear policies that allow easy access to real-time data, they need to consider how to set up a channel or pipeline to access that data.

Organisations also need to ensure a constant supply of real-time data. ML models should not be trained on a single fixed set of data, so organisations need to set them up so that they can retrain their models to adapt to the changing behaviour of the data and the systems they're working with.
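
As an illustration of that retraining point, the refresh can be as simple as a scheduled job that refits the model on the most recent window of data. The sketch below is a minimal, hypothetical example: the file names, the label column, and the choice of model are all illustrative rather than a recommended setup.

```python
# Minimal sketch of scheduled retraining on fresh data. All names are hypothetical.
import pandas as pd
import joblib
from sklearn.linear_model import LogisticRegression

def retrain(window_csv="latest_window.csv", model_path="model.joblib"):
    df = pd.read_csv(window_csv)                   # most recent slice of live data
    X, y = df.drop(columns=["label"]), df["label"]
    model = LogisticRegression(max_iter=1000).fit(X, y)
    joblib.dump(model, model_path)                 # swap in the refreshed model
    return model

if __name__ == "__main__":
    retrain()  # run on a schedule (e.g. nightly) so the model tracks changing data
```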

Additionally, there is a significant talent shortage. While the number of qualified data scientists is growing, only a small number can be produced each year.

There are concerns about access to certain types of data, particularly when you have different groups of employees or other stakeholders coming in at different times to work on projects. So organisations should consider filtering out any potentially sensitive information from the data first so that the rest can be used to deploy ML algorithms.

Another issue to overcome is how to fulfil the demand for data scientists. While it's great to see more data scientists emerging in the workplace, a lot of time still goes into training them, so supply is still not keeping up with rising demand.

However, more people who have been trained in other areas, such as senior business analysts and software engineers, are increasingly expanding their knowledge of data science and ML, which can help bridge that gap.

Additionally, organisations will have IT business analysts who have experience handling databases. Even if they're not programmers, they're still analytically minded, so they can take advantage of ML through self-training too.

All of these developments are following a positive trend, as tools and platforms are beginning to allow a broader range of users to engage with ML and make it useful for them.

I see two main misconceptions about ML that relate to its complexity and capabilities. First, businesses often think that ML is very complex and requires PhDs to get value out of an implementation.

Many relatively simple ML algorithms can be applied to business data to provide predictions or classifications. On the other hand, there is an unrealistic conception that ML is a panacea for all business problems. The sweet spot is understanding the realistic capabilities of different, well-understood ML algorithms and matching them with the right business data to derive real value.
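
To make the first point concrete, here is a minimal sketch of applying an off-the-shelf classifier to ordinary tabular business data with scikit-learn. The CSV path and column names are hypothetical; the point is only that a few lines of standard tooling can already produce useful predictions.

```python
# A simple classifier on tabular business data. File and column names are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("customers.csv")                      # hypothetical business data
X = df[["tenure_months", "monthly_spend", "support_tickets"]]
y = df["churned"]                                      # 0/1 label to predict

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```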

Businesses of all sizes should be working with universities or creating apprenticeship programmes to bring in fresh talent.

For example, the Computer Science department at the University of San Francisco offers a project course for both undergraduates and graduates for one term, where they typically do small project work with an industry sponsor. This not only allows students to work with a variety of different companies, but it also makes the recruitment process for businesses far easier.

Another way to help bridge the skills gap is through investment in technology that will lift the burden off IT professionals.

Low-code/no-code platforms are a prime example of this, as they can enable data tasks to be undertaken by people outside of the IT department working in the lines of business.

Currently, getting the right data in place to deploy ML algorithms is an incredibly time-consuming process. A lot of time is spent trying to access and sift through vast volumes of disorganised data with manual coding, leaving IT professionals little time to focus on higher-value tasks.

By investing in the right low-code/no-code technology, businesses can easily automate data pipelines, giving all departments regular access to real-time data and making ML processes as seamless as possible with little to no coding required.

Businesses can look at how investment in emerging technologies will benefit them in two ways: either to get ahead of their competition or to prevent their organisation from becoming obsolete.

Businesses need to follow, and even move ahead of, technology trends not only to offer a better experience and more effective use of their resources, but also to continue providing the services that their users and customers expect.

Eventually, all organisations will need to adopt ML simply because it will become an expectation, so that applications and services can better anticipate what their users are attempting to do and provide recommendations or predictions that enable them to achieve their goals more rapidly.

This doesn't just apply to investment in the technologies, but to skills training as well. Businesses need to ensure that people are trained well to utilise these technologies, and also continue to help expand their skill set to harness the full potential of these emerging technologies.

View post:
Q&A: SnapLogic CTO makes the case for investment in machine learning - IT Brief Australia

The Convergence of RPA and Automated Machine Learning – AiiA


The future is now. We've been discussing the fact that RPA truly transforms the costs, accuracy, productivity, speed and efficiency of your enterprise. That transformation is all the more powerful with cognitive solutions baked-in.

Our old friends at Automation Anywhere combine forces with our new friends at DataRobot to discuss the integration and convergence of RPA and Automated ML, and how that combination can hurtle your enterprise further through this fourth industrial revolution.

Watch the session on demand now.

The Convergence of RPA and Automated Machine Learning
Greg van Rensburg, Director, Solutions Consulting, Automation Anywhere
Colin Priest, Vice President, AI Strategy, DataRobot

Robotic Process Automation (RPA) has disrupted repetitive business processes across a variety of industries. The combination of RPA, cognitive automation, and analytics is a game changer for unstructured data-processing and for gaining real-time insights. The next frontier? A truly complete, end-to-end process automation with AI-powered decision-making and predictive abilities. Join Automation Anywhere and DataRobot at this session to learn how organisations are using business logic and structured inputs, through a combination of RPA and Automated Machine Learning, to automate business processes, reduce customer churn and transform to digital operating models.

More here:
The Convergence of RPA and Automated Machine Learning - AiiA

Machine Learning Software is Now Doing the Exhausting Task of Counting Craters On Mars – Universe Today

Does the life of an astronomer or planetary scientist seem exciting?

Sitting in an observatory, sipping warm cocoa, with high-tech tools at your disposal as you work diligently, surfing along on the wavefront of human knowledge, surrounded by fine, bright people. Then one day, Eureka! All your hard work and the work of your colleagues pays off, and you deliver to humanity a critical piece of knowledge. A chunk of knowledge that settles a scientific debate, or that ties a nice bow on a burgeoning theory, bringing it all together. Conferences, tenure, a Nobel Prize?

Well, maybe in your first year of university you might imagine something like that. But science is work. And as we all know, not every minute of one's working life is super-exciting and gratifying.

Sometimes it can be dull and repetitious.

It's probably not anyone's dream, when they begin their scientific education, to sit in front of a computer poring over photos of the surface of Mars, counting the craters. But someone has to do it. How else would we all know how many craters there are?

Mars is the subject of intense scientific scrutiny. Telescopes, rovers, and orbiters are all working to unlock the planet's secrets. There are a thousand questions concerning Mars, and one part of understanding the complex planet is understanding the frequency of meteorite strikes on its surface.

NASA's Mars Reconnaissance Orbiter (MRO) has been orbiting Mars for 14.5 years now. Along with the rest of its payload, the MRO carries cameras. One of them is called the Context (CTX) Camera. As its name suggests, it provides context for the other cameras and instruments.

MRO's powerhouse camera is called HiRISE (High-Resolution Imaging Science Experiment). While the CTX camera takes wider-view images, HiRISE zooms in to take precision images of details on the surface. The pair make a potent team, and HiRISE has treated us to more gorgeous and intriguing pictures of Mars than any other instrument.

But the cameras are kind of dumb in a scientific sense. It takes a human being to go over the images. As a NASA press release tells us, it can take 40 minutes for one researcher to go over a single CTX image, hunting for small craters. Over the lifetime of the MRO so far, researchers have found over 1,000 craters this way. They're not just looking for craters; they're interested in any changes on the surface: dust devils, shifting dunes, landslides, and the like.

AI researchers at NASA's Jet Propulsion Laboratory in Southern California have been trying to do something about all the time it takes to find things of interest in these images. They're developing a machine learning tool to handle some of that workload. On August 26th, 2020, the tool had its first success.

Sometime between March 2010 and May 2012, a meteor slammed into Mars' thin atmosphere. It broke into several pieces before it struck the surface, creating what looks like nothing more than a black speck in CTX camera images of the area. The new AI tool, called an automated fresh impact crater classifier, found it. Once it did, NASA used HiRISE to confirm it.

That was the classifiers first find, and in the future, NASA expects AI tools to do more of this kind of work, freeing human minds up for more demanding thinking. The crater classifier is part of a broader JPL effort named COSMIC (Capturing Onboard Summarization to Monitor Image Change). The goal is to develop these technologies not only for MRO, but for future orbiters. Not only at Mars, but wherever else orbiters find themselves.

Machine learning tools like the crater classifier have to be trained. For its training, it was fed 6,830 CTX camera images. Among those images were ones containing confirmed craters, and others that contained no craters. That taught the tool what to look for and what not to look for.
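
The article does not include JPL's code, but the recipe it describes, training a binary classifier on labelled image tiles, looks roughly like the generic sketch below. The directory layout, image size, and network architecture are assumptions for illustration, not the actual crater classifier.

```python
# Generic sketch: train a small CNN to label image tiles as crater / no crater.
# Assumes tiles are sorted into ctx_tiles/crater/ and ctx_tiles/no_crater/.
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    "ctx_tiles/",
    labels="inferred",
    label_mode="binary",
    image_size=(128, 128),
    batch_size=32,
)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),    # probability of "crater"
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=10)
```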

Once it was trained, JPL took the system's training wheels off and let it loose on over 110,000 images of the Martian surface. JPL has its own supercomputer: a cluster containing dozens of high-performance machines that can work together. The result? The AI running on that powerful cluster took only five seconds to complete a task that takes a human about 40 minutes. But it wasn't easy to do.

"It wouldn't be possible to process over 112,000 images in a reasonable amount of time without distributing the work across many computers," said JPL computer scientist Gary Doran in a press release. "The strategy is to split the problem into smaller pieces that can be solved in parallel."
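
The strategy Doran describes can be illustrated with a small sketch: split the list of images across worker processes and gather the per-image results. The classify_image function here is a stand-in for the real analysis, and the paths are hypothetical.

```python
# Sketch of splitting image analysis across a pool of worker processes.
from multiprocessing import Pool
from pathlib import Path

def classify_image(path):
    # Placeholder: run the trained classifier on one image and
    # return any candidate detections it finds.
    return path.name, []            # (image name, list of candidate craters)

if __name__ == "__main__":
    images = list(Path("ctx_images/").glob("*.png"))   # hypothetical location
    with Pool(processes=8) as pool:                     # one chunk of work per process
        results = pool.map(classify_image, images)
    hits = [name for name, candidates in results if candidates]
    print(f"scanned {len(images)} images, {len(hits)} contain candidate craters")
```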

But while the system is powerful, and represents a huge saving of human time, it can't operate without human oversight.

"AI can't do the kind of skilled analysis a scientist can," said JPL computer scientist Kiri Wagstaff. "But tools like this new algorithm can be their assistants. This paves the way for an exciting symbiosis of human and AI investigators working together to accelerate scientific discovery."

Once the crater finder scores a hit in a CTX camera image, it's up to HiRISE to confirm it. That happened on August 26th, 2020. After the crater finder flagged a dark smudge in a CTX camera image of a region named Noctis Fossae, the power of HiRISE gave scientists a closer look. That confirmed the presence of not one crater, but a cluster of several, resulting from the objects that struck Mars between March 2010 and May 2012.

With that initial success behind them, the team developing the AI has submitted more than 20 other CTX images to HiRISE for verification.

This type of software system can't run on an orbiter yet. Only an Earth-bound supercomputer can perform this complex task. All of the data from CTX and HiRISE is sent back to Earth, where researchers pore over it, looking for images of interest. But the AI researchers developing this system hope that will change in the future.

"The hope is that in the future, AI could prioritize orbital imagery that scientists are more likely to be interested in," said Michael Munje, a Georgia Tech graduate student who worked on the classifier as an intern at JPL.

There's another important aspect to this development. It shows how older, still-operational spacecraft can be re-energized with modern technological power, and how scientists can wring even more results from them.

Ingrid Daubar is one of the scientists working on the system. She thinks that this new tool will help find more craters that are eluding human eyes. And if it can, it'll help build our knowledge of the frequency, shape, and size of meteor strikes on Mars.

"There are likely many more impacts that we haven't found yet," Daubar said. "This advance shows you just how much you can do with veteran missions like MRO using modern analysis techniques."

This new machine learning tool is part of a broader-based NASA/JPL initiative called COSMIC (Content-based On-board Summarization to Monitor Infrequent Change). That initiative has a motto: "Observe much, return best."

The idea behind COSMIC is to create a robust, flexible orbital system for conducting planetary surveys and change monitoring in the Martian environment. Due to bandwidth considerations, many images are never downloaded to Earth. Among other goals, the system will autonomously detect changes in non-monitored areas, and provide relevant, informative descriptions of onboard images to advise downlink prioritization. The AI that finds craters is just one component of the system.

Data management is a huge and growing challenge in science. Other missions like NASA's Kepler planet-hunting spacecraft generated an enormous amount of data. In an effort that parallels what COSMIC is trying to do, scientists are using new methods to comb through all of Kepler's data, sometimes finding exoplanets that were missed in the original analysis.

And the upcoming Vera C. Rubin Survey Telescope will be another data-generating monster. In fact, managing all of its data is considered to be the most challenging part of that entire project. It'll generate about 200,000 images per year, or about 1.28 petabytes of raw data. That's far more data than humans will be able to deal with.

In anticipation of so much data, the people behind the Rubin Telescope developed the LSSTC Data Science Fellowship Program. It's a two-year program designed for grad school curriculums that will explore topics including statistics, machine learning, information theory, and scalable programming.

It's clear that AI and machine learning will have to play a larger role in space science. In the past, the amount of data returned by space missions was much more manageable. The instruments gathering the data were simpler, the cameras were much lower resolution, and the missions didn't last as long (not counting the Viking missions).

And though a system designed to find small craters on the surface of Mars might not capture the imagination of most people, it's indicative of what the future will hold.

One day, more scientists will be freed from sitting for hours at a time going over images. They'll be able to delegate some of that work to AI systems like COSMIC and its crater finder.

We'll probably all benefit from that.


Here is the original post:
Machine Learning Software is Now Doing the Exhausting Task of Counting Craters On Mars - Universe Today

Samsung launches online programme to train UAE youth in AI and machine learning – The National

Samsung is rolling out a new course offering an introduction to machine learning and artificial intelligence in the UAE.

The course, which is part of its global Future Academy initiative, will target UAE residents between the ages of 18 and 35 with a background in science, technology, engineering and mathematics and who are interested in pursuing a career that would benefit from knowledge of AI, the South Korean firm said.

The five-week programme will be held online and cover subjects such as statistics, algorithms and programming.

"The launch of the Future Academy in the UAE reaffirms our commitment to drive personal and professional development and ensure this transcends across all areas in which we operate," said Jerric Wong, head of corporate marketing at Samsung Gulf Electronics.

In July, Samsung announced a similar partnership with Misk Academy to launch AI courses in Saudi Arabia.

The UAE, a hub for start-ups and venture capital in the Arab world, is projected to benefit the most in the region from AI adoption. The technology is expected to contribute up to 14 per cent of the country's gross domestic product, equivalent to Dh352.5 billion, by 2030, according to a report by consultancy PwC.

In Saudi Arabia, AI is forecast to add 12.4 per cent to GDP.

Held under the theme "be ready for tomorrow by learning about it today", the course will be delivered in a blended-learning, self-paced format. Participants can access presentations and pre-recorded videos detailing their course materials.

"Through the Future Academy's specialised curriculum, participants will learn about the tools and applications that feature prominently in AI and machine learning-related workplaces," Samsung said.

"The programme promises to be beneficial, providing the perfect platform for determined beginners and learners to build their knowledge in machine learning and establishing a strong understanding of the fundamentals of AI," it added.

Applicants can apply here by October 29.

Updated: October 6, 2020 07:57 PM

Excerpt from:
Samsung launches online programme to train UAE youth in AI and machine learning - The National

Long-term PM 2.5 Exposure and the Clinical Application of Machine Learning for Predicting Incident Atrial Fibrillation – DocWire News

The clinical impact of fine particulate matter (PM2.5) air pollution on incident atrial fibrillation (AF) has not been well studied. We used integrated machine learning (ML) to build several incident AF prediction models that include average hourly measurements of PM2.5 for 432,587 subjects from the Korean general population. We compared these incident AF prediction models using the c-index, net reclassification improvement index (NRI), and integrated discrimination improvement index (IDI). ML using the boosted ensemble method exhibited a higher c-index (0.845 [0.837-0.853]) than existing traditional regression models using CHA2DS2-VASc (0.654 [0.646-0.661]), CHADS2 (0.652 [0.646-0.657]), or HATCH (0.669 [0.661-0.676]) scores (each p < 0.001) for predicting incident AF.

As feature selection algorithms identified PM2.5 as a highly important variable, we applied PM2.5 for predicting incident AF and constructed scoring systems. The prediction performances significantly increased compared with models without PM2.5 (c-indices: boosted ensemble ML, 0.954 [0.949-0.959]; PM-CHA2DS2-VASc, 0.859 [0.848-0.870]; PM-CHADS2, 0.823 [0.810-0.836]; or PM-HATCH score, 0.849 [0.837-0.860]; each interaction, p < 0.001; NRI and IDI were also positive). ML combining readily available clinical variables and PM2.5 data was found to predict incident AF better than models without PM2.5 or even established risk prediction approaches in the general population exposed to high air pollution levels.
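
The approach described above, a boosted ensemble over clinical variables plus PM2.5 exposure evaluated with a c-index, can be sketched generically as follows. This is not the study's code; for a binary outcome the c-index is equivalent to the ROC AUC, and every file and column name below is hypothetical.

```python
# Generic sketch: boosted ensemble on clinical features plus PM2.5, scored by c-index.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("cohort.csv")                        # hypothetical cohort extract
features = ["age", "sex", "hypertension", "heart_failure", "mean_pm25"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["incident_af"], test_size=0.2, random_state=0
)

model = GradientBoostingClassifier().fit(X_train, y_train)
risk = model.predict_proba(X_test)[:, 1]              # predicted risk of incident AF
print("c-index (ROC AUC):", roc_auc_score(y_test, risk))
```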

See more here:
Long-term PM 2.5 Exposure and the Clinical Application of Machine Learning for Predicting Incident Atrial Fibrillation - DocWire News

Top Machine Learning Companies in the World – Virtual-Strategy Magazine

Machine learning is a complex field of science that has to do with scientific research and a deep understanding of computer science. Your vendor must have proven experience in this field.

In this post, we have collected some of the top machine learning companies worldwide. Each of them has at least 5 years of experience, has worked on dozens of ML projects, and enjoys high rankings on popular online aggregators. We have carefully studied their portfolios and what former clients say about working with them. By contracting a vendor from this list, you can be sure that you will receive the highest quality.

Best companies for machine learning

1. Serokell

Serokell is a software development company that focuses on R&D in programming and machine learning. Serokell is the founder of Serokell Labs, an interactive laboratory that studies new theories of pure and applied mathematics and the academic and practical applications of ML.

Serokell is an experienced, fast-growing company that unites qualified software engineers and scientists from all over the world. Combining scientific research and a data-based approach with business thinking, they manage to deliver exceptional products to the market. Serokell has experience with custom software development in blockchain, fintech, edtech, and other fields.

2. Dogtown Media

Dogtown Media is a software vendor that applies artificial intelligence and machine learning in the field of mobile app development. AI helps them to please their customers with outstanding user experiences and helps businesses to scale and develop. Using machine learning for mobile apps, they make them smarter, more efficient, and more accurate.

Among the clients of Dogtown Media are Google, YouTube, and other IT companies and startups that use machine learning daily.

3. Iflexion

This custom software development company covers every aspect of software engineering, including machine learning.

Iflexion has more than 20 years of tech experience. They are proficient at building ML-powered web applications for e-commerce as well as applying artificial intelligence technologies to e-learning, augmented reality, computer vision, and big data analytics. In their portfolio, you can find a dating app with a recommender system, a travel portal, and countless business intelligence projects that prove their expertise in the field.

4. ScienceSoft

ScienceSoft is an experienced provider of top-notch IT services that works across different niches. They have a portfolio full of business-minded projects in data analytics, internet of things, image analysis, and e-commerce.

Working with ScienceSoft, you place your project in the hands of R&D masters who can take over the software development process. The team makes fast, data-driven decisions and delivers high-quality products in reduced time.

5. Icreon

If you are looking for an innovative software development company that helps businesses to amplify their net impact to customers and employees, pay attention to Icreon.

This machine-learning software vendor works with market leaders in different niches and engineers AI strategies for their business prosperity. Icreon has firsthand, real-world experience building out applications, platforms, and ecosystems that are driven by machine learning and artificial intelligence.

6. Hidden Brains

Hidden Brains is a software development firm that specializes in AI, ML, and IoT. During its 17 years of existence, it has used its profound knowledge of the latest technologies to deliver projects for healthcare, retail, education, fintech, logistics, and more.

Hidden Brains offers a broad set of machine learning and artificial intelligence consulting services, putting the power of machine learning in the hands of every startupper and business owner.

7. Imaginovation

Imaginovation was founded in 2011 and focuses on web design and development. It actively explores all the possibilities of artificial intelligence in its work.

The agency's goal is to boost the business growth of its clients by providing software solutions for recommendation engines, automated speech and text translation, and effectiveness assessment. Its most high-profile clients are Nestlé and MetLife.

8. Cyber Infrastructure

Cyber Infrastructure is among the leading machine learning companies, with more than 100 projects in its portfolio. With their AI solutions, they have impacted a whole variety of industries: from hospitality and retail to fintech and high-tech.

The team specializes in using advanced technologies to develop AI-powered applications for businesses worldwide. Their effort to create outstanding projects has been recognized by Clutch, Good Firms, and AppFutura.

9. InData Labs

InData Labs is a company that delivers a full package of AI-related services, including data strategy, AI consulting, and AI software development. They have plenty of experience working with machine learning, NLP, computer vision, and predictive modeling.

InData Labs analyses its clients' capabilities and needs, designs a future product concept, integrates the ML system into any type of production environment, and improves previously built models.

10. Spire Digital

Spire Digital is one of the most eminent AI development companies in the USA. They have worked on more than 600 cases and have deep expertise in applying AI in the fields of finance, education, logistics, healthcare, and media. Among other tasks, Spire Digital helps with building and integrating AI into security systems and smart home systems.

Over more than 20 years, the company has managed to win major awards, including #1 Software Developer In The World from Clutch.co and a spot on the Fastest Growing Companies In America list from Inc. 5000.

Conclusion

Working with a top developer, you get high-quality software development and extensive expertise in machine learning. These companies apply the most cutting-edge technologies to help your business expand and grow.

Media Contact
Company Name: Serokell
Contact Person: Media Relations
Email: Send Email
Phone: (+372) 699-1531
Country: Estonia
Website: https://serokell.io/

Link:
Top Machine Learning Companies in the World - Virtual-Strategy Magazine

TinyML And Its ‘Great’ Application in IoT Technology – Analytics India Magazine

Tiny machine learning (TinyML) is an embedded software technology that can be used to build low-power devices that run machine learning models. It is also more famously referred to as the missing link between device intelligence and edge hardware. It makes computing at the edge less expensive and more stable. Further, TinyML facilitates improved response times, privacy, and lower energy costs.

TinyML is massively growing in popularity with every passing year. As per ABI Research, a global tech market advisory firm, by 2030, about 230 billion devices will be shipped with TinyML chipset.

TinyML has the ability to provide a range of applications, from micro-satellite imagery and wildfire detection to identifying crop ailments and animal illness. Another area of application that is drawing great attention is IoT devices.

TinyML brings the ultra-low-power systems and machine learning communities together; this paves the way for more exciting on-device machine learning. TinyML sits at the intersection of embedded machine learning applications, algorithms, hardware, and software. Compared with a desktop CPU, which consumes 100 watts of power, TinyML requires just a few milliwatts of battery power. With such a major advantage, TinyML can provide great longevity to always-on ML applications at the edge/endpoint.

There are about 250 billion microcontrollers in the world today, and this number is growing by 30 billion annually. The reason for their pervasiveness is that, firstly, they give small devices the ability to make smart decisions without needing to send data to the cloud. Further, TinyML models are small enough to fit into almost any environment. Take the example of imagery micro-satellites, which are required to capture high-resolution images but are restricted by the size and number of photos they can transmit back to Earth. With TinyML, the micro-satellite only captures an image if there is an object of interest, such as a ship or a weather pattern.
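
One common way models are shrunk to fit such constrained environments is post-training quantization. The sketch below shows the standard TensorFlow Lite conversion step; the trained model and file names are placeholders. The resulting .tflite file can then be compiled into a C array and run on-device with TensorFlow Lite for Microcontrollers.

```python
# Shrink a trained Keras model with post-training quantization for a microcontroller.
import tensorflow as tf

model = tf.keras.models.load_model("keyword_model.h5")    # placeholder trained model

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]       # enable quantization
tflite_model = converter.convert()

with open("keyword_model.tflite", "wb") as f:
    f.write(tflite_model)
print(f"quantized model size: {len(tflite_model) / 1024:.1f} KB")
```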

TinyML has the potential to transform the way one deals with IoT data, where billions of tiny devices are already used to provide greater efficiency in fields of medicine, automation, and manufacturing.

It is very important to make a clear distinction between serving machine learning to IoT and developing machine learning inside IoT devices. In the former, the machine learning tasks are outsourced to the cloud while the IoT device waits for the execution of intelligent services; in the latter, TinyML-as-a-service is employed and the IoT device is part of the execution of the services. TinyML represents a connecting point between IoT devices and ML.

The hardware requirements for machine learning in larger systems are analogous to those of TinyML in smaller IoT devices. As the number of IoT devices hitting the market increases, we could see even higher investment in TinyML research, exploring concepts such as deep neural networks, model compression, and deep reinforcement learning.

There are also a few challenges to integrating TinyML into IoT devices.

Speaking in detail about the applications of TinyML: it can be used in sensors for real-time traffic management and easing urban mobility; in manufacturing, TinyML can enable real-time decision-making to identify equipment failure, with workers alerted to perform preventive maintenance based on equipment conditions; and in retail, TinyML can be used to monitor resource availability.

TinyML is gaining ground but is still at a very nascent stage. It is expected to take over the space with cross-sector applications very soon.


Excerpt from:
TinyML And Its 'Great' Application in IoT Technology - Analytics India Magazine

Machine learning to transform delivery of major rail projects in UK – Global Railway Review

By utilising machine learning, Network Rail can increase prediction accuracy, reduce delays, unlock early risk detection and enable significant cost savings.


Network Rail has announced that it is working with technology startup nPlan to use machine learning technology across its portfolio of projects, which has the potential to transform the way major rail projects are delivered across Britain.

Through using data from past projects to produce accurate cost and time forecasts, the partnership will deliver efficiencies in the way projects are planned and carried out, and improve service reliability for passengers by reducing the risk of overruns.

In a world-first for such work on this scale, Network Rail tested nPlan's risk analysis and assurance solution on two of its largest rail projects, the Great Western Main Line and the Salisbury to Exeter Signalling project, representing over £3 billion of capital expenditure.

This exercise showed that, by leveraging past data, cost savings of up to £30 million could have been achieved on the Great Western Main Line project alone. This was primarily achieved by flagging unknown risks to the project team, risks that are invisible to the human eye due to the size and complexity of the project data, and allowing the team to mitigate those risks before they occur, at a significantly lower cost than if they are missed or ignored.

The machine learning technology works by learning from patterns in historical project performance. Put simply, the algorithm learns by comparing what was planned against what actually happened on a project at an individual activity level. This facilitates transparency and a shared, improved view of risk between project partners.
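
nPlan has not published its model, but the planned-versus-actual idea can be illustrated with a generic sketch: fit a regressor on how activities performed in past projects, then flag activities in a new schedule that are predicted to overrun. The column names and the overrun threshold below are hypothetical.

```python
# Illustrative sketch only (not nPlan's algorithm): predict actual activity durations
# from planned durations and activity features, then flag likely overruns.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

history = pd.read_csv("past_projects.csv")            # one row per completed activity
X = history[["planned_days", "activity_type_code", "num_dependencies"]]
y = history["actual_days"]

model = GradientBoostingRegressor().fit(X, y)

new_plan = pd.read_csv("new_project_plan.csv")
new_plan["predicted_days"] = model.predict(
    new_plan[["planned_days", "activity_type_code", "num_dependencies"]]
)
# Flag activities predicted to overrun their plan by more than 20%.
at_risk = new_plan[new_plan["predicted_days"] > 1.2 * new_plan["planned_days"]]
print(at_risk[["activity_id", "planned_days", "predicted_days"]])
```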

Following the success of this trial, nPlan and Network Rail will now embark on the next phase of deployment, rolling out the software on 40 projects before scaling up on all Network Rail projects by mid-2021. Using data from over 100,000 programmes, Network Rail will increase prediction accuracy, reduce delays, allow for better budgeting and unlock early risk detection, leading to greater certainty in the outcome of these projects.

Network Rail's Programme Director for Affordability, Alastair Forbes, said: "By championing innovation and using forward-thinking technologies, we can deliver efficiencies in the way we plan and carry out rail upgrade and maintenance projects. It also has the benefit of reducing the risk of project overruns, which means, in turn, we can improve reliability for passengers."

Dev Amratia, CEO and co-founder of nPlan, said: "Network Rail is amongst the largest infrastructure operators in Europe, and adopting technology to forecast and assure projects can lead to better outcomes for all of Britain's rail industry, from contractors to passengers. I look forward to significantly delayed construction projects, and the disruption that they cause for passengers, becoming a thing of the past, with our railways becoming safer and more resilient."

Read more here:
Machine learning to transform delivery of major rail projects in UK - Global Railway Review

Recession Spurring Increased Adoption of Open Source Software According to Latest Yearly Survey by Tidelift – PRNewswire

BOSTON, Oct. 7, 2020 /PRNewswire/ --Use of open source software is expected to increase during the pandemic as businesses look to save time and money, while increasing efficiency, according to the third annual Managed Open Source Survey released today by Tidelift, the largest provider of commercial support and maintenance for the community-led open source behind modern applications.

More than 600 technologists shared how they use open source software today, what holds them back, and what tools and strategies would help them use it even more effectively.

"As the long-term move towards open source continues, our data shows that the recent economic downturn may be an accelerant," said Tidelift CEO Donald Fischer. "This finding continues a trend that began after the recession of the early 2000s and continued after the financial crisis of 2008. Organizations turn to open source in tough economic times because it helps them reduce costs and improves their ability to innovate."

Key Findings

Organizations are turning to open source during the COVID-19 recession to do more with less.

Yet using open source presents new challenges, which differ depending on company size.

Organizations take different approaches to contributing to open source.

The study also found the top three programming languages organizations rely on most are JavaScript, Python, and Java. JavaScript is used by over three-fourths of organizations (78%) while Python is used by just over half (52%). Java is used in applications far more often at larger organizations (66% vs. only 32% for the full sample).

As organizations continue to accelerate their use of open source and grapple with how to best choose, upgrade, and maintain this influx of new open source components, Tidelift simplifies the process. The Tidelift Subscription makes it easier for organizations to create and manage catalogs of known-good properly maintained open source components, while paying the maintainers who created them to keep them enterprise ready.

To receive a copy of the survey, go here. This marks the third year Tidelift has conducted a survey to answer the most pressing questions for technologists using open source to develop applications. This year's survey was conducted from May 28 through July 4, 2020. Participants were contacted via Tidelift and Changelog email lists and social media. Tidelift screened respondents to make sure they use open source to build applications at work, and the full survey sample was 638 respondents.

About Tidelift
Tidelift is the largest provider of commercial support and maintenance for the community-led open source behind modern applications. The company partners with independent project maintainers to make it safer, easier, and more cost-effective for application development teams to build with open source, so they can create even more incredible software, even faster. The Tidelift managed open source solution delivers customizable catalogs of components that are actively maintained, secure, and accurately licensed, enabling development teams to build and deploy with confidence. Tidelift makes open source work better for everyone. https://tidelift.com/

SOURCE Tidelift

http://www.tidelift.com

Read more:
Recession Spurring Increased Adoption of Open Source Software According to Latest Yearly Survey by Tidelift - PRNewswire

SD Times Open-Source Project of the Week: Swift System – SDTimes.com

The Swift programming language team has announced its library for idiomatic interfaces is now open source. Swift System was first introduced in June for Apple platforms. It provides idiomatic interfaces to system calls and low-level types. As part of the announcement, it now includes Linux support.

"Most operating systems today support some flavor of system interfaces written in C that have existed for decades. While it is possible to use these APIs directly from Swift, these weakly-typed system interfaces imported from C can be error-prone and unwieldy," Michael Ilseman, an engineer on the Swift Standard Library team at Apple, wrote in a blog post.

The weakly-typed functions fail to utilize the expressivity and type safety of Swift because the semantic rules aren't captured in the API's signature, preventing the programming language from guiding the user towards correct usage of the API, according to Ilseman.

Meanwhile, the System module brings these various language features to bear to improve expressivity and eliminate many opportunities for error.

For example, System defines the open system call as a static function with default arguments in the FileDescriptor namespace.

Notably, System pervasively uses raw representable structs and option sets, and the strong types help catch mistakes at compile time. Also, errors are thrown using the standard language mechanism and cannot be missed.

Last but not least, FilePath is a managed, null-terminated bag-of-bytes that conforms to ExpressibleByStringLiteral, making it far safer to work with than an UnsafePointer.

Swift System's immediate goal is to simplify building cross-platform libraries and applications such as SwiftNIO and the Swift Package Manager. This will include enhancements to FilePath and adding support for the recently announced Swift on Windows.

Read the rest here:
SD Times Open-Source Project of the Week: Swift System - SDTimes.com