Camp For Seriously Ill Children To Build Second Location On Maryland's Eastern Shore – CBS Baltimore

BALTIMORE (WJZ) - A well-known camp for sick children and their families will build its second location on Maryland's Eastern Shore.

Paul Newman's Hole in the Wall Gang Camp announced plans Tuesday to open its second location in Queenstown, Md., at the Aspen Institute's former 166-acre Wye River Conference Center.

The Aspen Institute, a nonprofit for humanistic studies, is donating a majority of the property to Hole in the Wall. The property was gifted to the institute in 1988, so the organization is paying it forward.

"Since 1979, the Wye River campus has played an important role in the Aspen Institute's history. This beautiful and protected site has hosted countless seminars and convenings, including some of international significance," said Dan Porterfield, President and CEO of the Aspen Institute. "We are now proud to make available a significant part of this land to The Hole in the Wall Gang Camp to become their second location. Their mission to provide joy to children with serious illnesses and their families is inspiring, and secures a wonderful future for the Wye River campus."

The camp was founded in 1988 by legendary actor and entrepreneur Paul Newman to provide a different kind of healing to seriously ill children and their families, completely free of charge. The camp, based in Ashford, Connecticut, mostly serves families within a three-hour radius, as will the Maryland location.

The camp said the Maryland site is ideal because of its proximity to some of the United States' most prominent pediatric hospitals, like the Johns Hopkins Children's Center and the University of Maryland Children's Hospital.

The Wye Conference Center has several residential buildings and other conference facilities that will be renovated to give the camp a starting point to begin programming.

"The facilities and all programming will be designed to be family inclusive so that those most devastated and isolated by serious illnesses, including the rare disease community, will be able to find a caring community of support that understands their unique challenges," the camp said.

With its year-round programs, the camp serves 20,000 people annually, and it hopes to bring that impact to the Mid-Atlantic.

Lisa Nickerson and her son, Evan Bucklin, visited the camp when her daughter was ill.

"When my daughter died, I was equally worried about Evan because emotionally it took such a toll," Nickerson said.

"It truly is a one-of-a-kind experience," Bucklin said. "It gives you a lot of the things kids at that age, in that environment, really need, which is both friendship and an environment where they can have some fun. It gave me some sense of power over my own life."

Construction and renovation at the camp is expected to be complete by summer 2023, when programming will begin.


Coastal Studies Institute to hold open house – The Coastland Times

East Carolina University's Integrated Coastal Programs (ECU ICP) and the Coastal Studies Institute (CSI) are hosting an open house from 1 to 4 p.m. on June 4, 2022, at the ECU Outer Banks Campus in Wanchese. The public is welcome to attend this free event.

Attendees will be able to tour the campus, grounds and facilities, learn about current research and education programs, take part in family-friendly activities and interact with faculty and staff from ECU, CSI and partner organizations. The LEED Gold-certified ECU Outer Banks Campus is located on Roanoke Island at 850 NC Highway 345, approximately one mile from the U.S. 64 and NC 345 intersection.

Located on the ECU Outer Banks Campus, ECU's Integrated Coastal Programs is a leader in coastal and marine research, education and engagement. The program uses an interdisciplinary approach and scientific advances to provide effective solutions to complex problems while helping coastal communities, ecosystems and economies thrive. ECU ICP includes a transdisciplinary Department of Coastal Studies, a PhD program in Integrated Coastal Sciences and the Coastal Studies Institute.

The Coastal Studies Institute is a multi-institutional research partnership led by East Carolina University, in association with NC State, UNC Chapel Hill, UNC Wilmington and Elizabeth City State University. CSI focuses on integrated coastal research and education programming centered on responding to the needs, issues and topics of concern of the residents of eastern North Carolina.

ECU ICP and CSI research and education initiatives span a variety of coastal topics from nearshore coastal estuaries to the offshore waters along the continental shelf. Visitors to the 2022 open house will learn about research initiatives first-hand from faculty and staff stationed throughout the facility.

Coastal geoscientists are researching the processes that drive coastal change, their effect on communities and ways to become more resilient in the face of increasing hazards that threaten the coast.

Ecologists are studying estuarine systems, their inputs and how people can ensure healthy coastal ecosystems for the future.

Oceanographers and coastal engineers are exploring ways to harness the power of the Gulf Stream, waves and other renewable ocean energy sources using new technologies to broaden North Carolina's energy portfolio.

Social scientists are working with coastal residents, visitors and relevant social statistics to better understand the impacts coastal change has on communities, while working to develop new and prosperous economies for the future.

Maritime archaeologists are researching and discovering new shipwrecks using advanced technologies while celebrating the maritime heritage of eastern North Carolina.

Faculty and staff are engaging the local community and the next generation of scientists and decision makers in education programming that fosters student interest in the fields of technology, engineering, art, math and science.

"ECU ICP and CSI welcome the public to take part in a fun-filled, engaging and educational event," stated the event announcement. "Join us for the 2022 Open House on the ECU Outer Banks Campus at the Coastal Studies Institute from 1:00 to 4:00 p.m. on Saturday, June 4, 2022."



PDF to Excel conversion: Your ultimate guide to the best tools – Computerworld

In an ideal world, the data we need to analyze would be available in ready-to-use format. In the world we live in, though, a lot of valuable data is locked inside Portable Document Format (PDF) documents. How do you extract that data from PDFs into an Excel spreadsheet? You have a number of PDF to Excel converters to choose from.

There's software from major vendors like Microsoft and Adobe, task-specific cloud services including PDFTables and Cometdocs, services from general-purpose cloud providers such as Amazon, and even free open-source options.

Which is the best PDF to Excel converter? As with "the best computer," the answer depends on your specific circumstances.

There are several important considerations when selecting a PDF converter.

1. Was my PDF generated by an application, or is it a scanned image? There are two types of PDF files. One is generated by an application like Microsoft Word; the other comes from a scanned or other image file. You can tell which one you have by trying to highlight some text in the document. If a click and drag works to highlight text, your PDF is app-generated. If it doesn't, you've got a scan. Not all PDF conversion tools work on scanned PDFs.
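A quick programmatic version of the "try to highlight text" test: if no page yields extractable text, the PDF is most likely a scanned image. This is a minimal sketch; the pypdf usage in the comments is an assumption, and "report.pdf" is a hypothetical file name.

```python
# Heuristic: a PDF with no extractable text on any page is likely a scan.
def looks_scanned(page_texts):
    """page_texts: text extracted per page (entries may be None or empty)."""
    return not any(t and t.strip() for t in page_texts)

# Hedged usage sketch (assumes the pypdf package and a local file):
# from pypdf import PdfReader
# texts = [page.extract_text() for page in PdfReader("report.pdf").pages]
# print("scanned image" if looks_scanned(texts) else "app-generated")
```

Note that a scan that has been through OCR may contain a text layer, so this heuristic only flags PDFs with no extractable text at all.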

2. How complex is the data structure? Almost every tool will work well on a simple one-page table. Things get more complicated if tables are spread over multiple pages, table cells are merged, or some data within a table cell wraps over multiple lines.

3. Do I have a large volume of files that need batch file conversions or automation? Our best-performing tool on app-generated PDFs may not be the best choice for you if you want to automate frequent batch conversions.

In addition, as with any software choice, you need to decide how much you value performance versus cost and ease of use.

To help you find what's best for your tasks, we tested seven PDF to Excel conversion tools using four different PDF files ranging from "simple" to "nightmare." You'll see how all the tools perform in each scenario and find out the strengths and weaknesses of each one.

Here are the tools we tested, starting with our overall best performers (but remember that "best" depends in part on the specific source document). All these tools did pretty well on at least some of our tasks, so rankings range from Excellent to Good.

As the creator of the Portable Document Format standard, you'd expect Adobe to do well in parsing PDFs, and it does. A full-featured conversion subscription is somewhat pricey, but there's also an inexpensive $2/month plan (annual subscription required) that includes an unlimited number of PDF to Excel conversions. (You can output Microsoft Word files with this tool as well.)

The Excel conversions include any text on pages that have both text and tables. This can be a benefit if youd like to keep that context or a drawback if you just want data for additional analysis.

Rating: Excellent. Our hands-down winner for non-scanned PDFs.

Cost: $24/year

Pros: Outstanding results; preserves much of the original formatting; deals well with tables spanning multiple pages; unlimited conversions of files up to 100MB; affordable for frequent users.

Cons: No built-in scripting/automation workflow; expensive if you only convert a few documents a year.

Bottom line: If you don't need to script or automate a lot of conversions and don't mind paying $24 per year, this is a great choice.

For an AWS cloud service, Textract is surprisingly easy to use. While you certainly can go through the usual multi-step AWS setup and coding process for Textract, Amazon also offers a drag-and-drop web demo that lets you download results as zipped CSVs. You just need to sign up for a (free) Amazon AWS account.

Rating: Excellent. This was our best option for a complicated scanned PDF.

Cost: 1.5 cents per page (100 pages per month free for your first three months at AWS)

Pros: Best option tested for a complicated scanned PDF; performed extremely well on all the app-generated PDFs; offers a choice of viewing results with merged or unmerged cell layout; easy to use; affordable.

Cons: Uploaded files are limited to 10 pages at a time. For those who want to automate, using this API is more complicated than some other options.

Bottom line: An excellent choice if you dont mind the AWS setup and either manual upload or coding with a complex API.
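For those who do want to code against Textract, the service returns tables as CELL blocks carrying 1-based RowIndex and ColumnIndex values, and reassembling them into rows is a small pure step. The boto3 call sketched in the comments is a hedged outline: "page.png" and the walk from Blocks to (row, col, text) tuples are assumptions to check against the Textract documentation.

```python
# Rebuild a 2-D grid from Textract-style (row_index, col_index, text) cells.
def cells_to_grid(cells):
    """cells: iterable of (row_index, col_index, text), 1-based indices."""
    n_rows = max(r for r, _, _ in cells)
    n_cols = max(c for _, c, _ in cells)
    grid = [["" for _ in range(n_cols)] for _ in range(n_rows)]
    for r, c, text in cells:
        grid[r - 1][c - 1] = text
    return grid

# Hedged usage sketch (assumes boto3 credentials and a local image file):
# import boto3
# resp = boto3.client("textract").analyze_document(
#     Document={"Bytes": open("page.png", "rb").read()},
#     FeatureTypes=["TABLES"],
# )
# ...collect (RowIndex, ColumnIndex, text) from the CELL entries in
# resp["Blocks"], then pass them to cells_to_grid()...
```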

If you're looking for free and open source, give Tabula a try. Unlike some free options from the Python world, Tabula is easy both to install and to use. And it has both a command-line and a browser interface, making it equally useful for batch conversions and point-and-click use.

Tabula did very well on PDFs of low or moderate complexity, although it did have an issue with the complex one (as did many of the paid platforms). Tabula requires a separate Java installation on Windows and Linux.

Rating: Very good. And you can't beat the price.

Cost: Free

Pros: Free; easy to install; has both a GUI and scripting options; allows you to manually change what areas of the page should be analyzed for tables; can save results as a CSV, TSV, JSON, or script; offers two different data extraction methods.

Cons: Needed some manual data cleanup on complex formatting; works on app-generated PDFs only.

Bottom line: A good choice if cost, ease of use, and automation options are high on your list of desired features and your PDFs aren't scanned.
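Tabula's scripting side can also be driven from Python via the tabula-py wrapper. This is a hedged sketch: it assumes tabula-py plus a Java runtime are installed, and "table.pdf" is a hypothetical file. The helper below is a typical cleanup step after extracting multi-page tables, where blank separator rows often appear.

```python
# Drop rows in which every cell is blank or None - a common cleanup step
# after extracting a table that spans multiple PDF pages.
def drop_empty_rows(rows):
    return [r for r in rows if any(c is not None and str(c).strip() for c in r)]

# Hedged usage sketch (assumes the tabula-py package and a Java runtime):
# import tabula
# frames = tabula.read_pdf("table.pdf", pages="all", lattice=True)
# for df in frames:
#     print(drop_empty_rows(df.values.tolist()))
```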

A key advantage to this service is automation. Its API is well documented and supports everything from Windows PowerShell and VBA (Office Visual Basic for Applications) to programming languages like Java, C++, PHP, Python, and R.

PDFTables performed well on most of the app-generated PDF tables, even understanding that a two-column header would be best as a single-column header row. It did have some difficulty with data in columns that were mostly empty but also had some data in cells spread over two lines. And while it choked on the scanned "nightmare" PDF, at least it didn't charge me for that.

Rating: Very good overall; excellent on automation.

Cost: 50 pages free at signup, including API use. After that it's $40 for up to 1,000 pages, and your credits are only good for a year.

Pros: Very good API; better performance on the moderately complex PDF than several of its paid rivals.

Cons: Pricey, especially if you use more than the 50 free pages but fewer than 1,000 page conversions in a year. Doesn't work on scanned PDFs.

Bottom line: Performs well and is easy to use both on the web and through scripting and programming. If you don't need an elegant API, however, you may prefer a less expensive option.
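As a rough illustration of that scripting workflow, here is a minimal sketch of calling the PDFTables API from Python with the requests library. The endpoint shape, the "f" upload field, and YOUR_API_KEY are assumptions to verify against the service's own documentation; "table.pdf" is a hypothetical file.

```python
# Build the (assumed) PDFTables conversion endpoint for a given key/format.
def pdftables_url(api_key, fmt="xlsx"):
    return f"https://pdftables.com/api?key={api_key}&format={fmt}"

# Hedged usage sketch (assumes the requests package and a valid API key):
# import requests
# with open("table.pdf", "rb") as fh:
#     resp = requests.post(pdftables_url("YOUR_API_KEY"),
#                          files={"f": ("table.pdf", fh)})
# resp.raise_for_status()
# open("table.xlsx", "wb").write(resp.content)
```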

PDFtoExcel.com is a freemium platform with paid options. It proved to be the lone free choice that was able to handle our scanned "nightmare" PDF.

Rating: Good.

Cost: Free in the cloud; $5/month or $49/year for the premium cloud tier with batch conversions and faster service; desktop software is $35 for 30-day use or $150 for lifetime use.

Pros: A lot of capability for the free option; works on scanned PDFs; affordable.

Cons: No API or cloud automation (we didn't test the desktop software); paid option required for batch conversions; split single-row multi-line data into multiple rows.

Bottom line: Nice balance of cost and features. This was most compelling for complex scanned PDFs, but others did better when cell data ran across multiple lines.

This web-based service is notable for multiple file format conversions: In addition to generating Excel, it can download results as Word, PowerPoint, AutoCAD, HTML, OpenOffice, and others. Free accounts can convert up to five files per week (30MB each); paid users get an unlimited number of conversions (2GB/day data limit).

Cometdocs is a supporter of public service journalism; the service offers free premium accounts to Investigative Reporters & Editors members (disclosure: I have one).

Rating: Good.

Cost: 5 free conversions/week; otherwise $10/month, $70/year or $130 lifetime.

Pros: Works on scanned PDFs; multiple input and output formats; generally good results; did extremely well on a two-page PDF with complex table format.

Cons: Not as robust on complex scanned PDFs as some other options; split one row's multi-line data into multiple rows; no clear script/automation option.

Bottom line: Particularly compelling if you're interested in multiple format exports and not just Excel.

Many people don't know that Excel can import PDFs directly, but only if you've got a Microsoft 365 or Office 365 subscription on Windows. It was a good choice for the simple file but got more cumbersome to use as PDF complexity rose. It's also likely to be confusing to people who aren't familiar with Excel's Power Query / Get & Transform interface.

How to import a PDF directly into Excel: In the Ribbon toolbar, go to Data > Get Data > From File > From PDF and select your file. For a single table, you'll likely have one choice to import. Select it and you should see a preview of the table and an option to either load it or transform the data before loading. Click Load and the table will pop into your Excel sheet.

For a single table on one page, this is a quick and reasonably simple choice. If you have multiple tables in a multi-page PDF, this also works well as long as each table is confined to one page. Things get a bit more complex if you've got one table spanning multiple PDF pages, though, and you'll need knowledge of Power Query commands.

It's somewhat unfair to compare Power Query data transformation with the other tools, since the results of any of these other PDF to Excel converters could also be imported into Excel for Power Query wrangling.

Rating: Good.

Cost: Included in a Microsoft 365/Office 365 Windows subscription.

Pros: You don't have to leave Excel to deal with the file; a lot of built-in data wrangling available for those who know Power Query.

Cons: Complex to use compared with most others on all but the simplest of PDFs; doesn't work on scanned PDFs; requires a Microsoft 365/Office 365 subscription on Windows.

Bottom line: If you've already got Microsoft 365/Office 365 on Windows and you've got a simple conversion task, Excel is worth a try. If you already know Power Query, definitely consider this for more PDF conversions! (If you don't, Power Query is a great skill to learn for Excel users in general.) If your PDF is more challenging and you don't already use Power Query / Get & Transform, though, you're probably better off with another option.

Here's how the seven tools fared in our four conversion tests:

Our "simple" task was a single-page app-generated PDF pulled from page 5 of a Boston housing report. It contained one table and some text, but column headers and two data cells did include text wrapped over two lines.

All the platforms we tested handled this one well. However, several broke up the multi-line text into multiple rows. The issue was easy to spot and fix in this example, but it could be difficult in larger files. For this easy one-pager, though, the PDF to Excel converters that weren't in first or second place still had very good results. All were worth using for this type of conversion.

First place (tie): Adobe and AWS Textract. With Adobe, no data cleanup was needed. The column headers even had the color formatting of the original. Adobe's conversion included text (with lovely formatting), which is useful if you want to keep written explanations together with the data in Excel. You'd need to delete the text manually if you want data only, but that's simple enough.

AWS Textract converted data only. No data cleanup was needed.

Close second: Excel. Data only. Excel didn't break wrapped text into two rows, but it did appear to run text together without a space in multi-line rows. The data was actually correct, though, when you looked at it in the formula bar; it just looked wrong in the overall spreadsheet. This was easily fixed by formatting cells with "wrap text." However, not everyone might know to do that when looking at their spreadsheet.

Others:

PDFTables: returned data and text. Same issue as Excel, appearing to keep wrapped text on a single line without a space between words. This was also easily fixed by wrapping text, if you knew to do so. This result would also need cleanup of a couple of words from a logo that appeared below the data. Explanatory text outside the logo had no problems, though.

Tabula: data only. Split multi-line cells into multiple rows.

Cometdocs: data and text. Split multi-line cells into multiple rows. Surrounding text was accurate, including logo text.

PDFtoExcel.com: similar performance to Cometdocs.

Our "moderate" PDF challenge was a single app-generated table spanning multiple PDF pages, via the Boston-area Massachusetts Water Resources Authority's data monitoring wastewater for Covid-19 traces.

First place: Adobe. One of the few to recognize that all the pages were the same table, so there were no blank rows between pages. Headers were in a single row, and spaces between words in the column names were maintained. Data structure was excellent, including keeping the multi-line wrap as is. It even reproduced background and text colors. The 11-page length wasn't a problem.

Second: AWS Textract. The header row was correct. Each page came back as a separate table, although it would be easy enough to combine them. The one strange issue: apostrophes were added at the beginning of the cells, possibly due to how I split the PDF, since I needed to create a file with only 10 pages. However, those apostrophes were easy to see and remove with a single search and replace, since the data didn't include any words with apostrophes. It was easier to get the exact data I needed than with Tabula, but more cumbersome to get the full data set.
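That single search-and-replace can be sketched as a small cleanup pass: strip a stray leading apostrophe from every cell. The sample rows here are illustrative, not the article's actual data.

```python
# Remove a stray leading apostrophe from a cell, leaving other values as-is.
def strip_leading_apostrophe(cell):
    if isinstance(cell, str) and cell.startswith("'"):
        return cell[1:]
    return cell

# Illustrative rows resembling the extraction artifact described above.
rows = [["'B.1.1.7", "'76%"], ["'Omicron", "'23%"]]
cleaned = [[strip_leading_apostrophe(c) for c in row] for row in rows]
```

As the article notes, this blanket approach is only safe because the data contains no legitimate words with apostrophes.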

Close third: Tabula. No blank rows between pages, data in the correct columns, and wrapped cells stayed in a single row. Unfortunately, while the wrapped data appeared properly when you looked at the cell contents in the formula bar, once again the data appeared to merge together in the full spreadsheet, and this wasn't as easily fixed by formatting with text wrapping as with Excel and PDFTables in the simple PDF.

For example, this was the content of one cell as it appeared in the formula bar:

B.1.1.7

76%

But in the overall spreadsheet, that same cell looked like:

B.1.1.776%

I was able to get that to display properly at times by increasing the row height manually, but this was an added step that most people wouldn't know to do, and it didn't seem to work all the time.

Others:

PDFtoExcel.com: multiple problems. The first few pages were fine except for multi-row headers, but data spanning two lines in single cells broke into two rows, generating blank rows elsewhere that would need to be fixed. In addition, columns were shifted to the right in one section. This would need cleanup.

PDFTables: multiple problems. All the data came in fine for most of the pages, but toward the end, a few cells that should have been in column J got merged with column I in ways that would be more difficult to fix than PDFtoExcel.com's. For example, this single cell:

Omicron

559 23%

was supposed to be "559" in one cell and "Omicron 23%" in the next cell.

Cometdocs: failed. Conversion failed on both the full PDF and the 10-page version I uploaded to AWS. It was able to convert a version with just the first 5 pages, but the full file should have been well below Cometdocs' account limits.

Excel: It was possible to get the data in a format I wanted, but it required data manipulation in Power Query as well as wrapping text. That's not a fair comparison with other platforms that needed only a single upload or command. Still, the results were ultimately excellent. If you're an Excel/Power Query power user, this is a good choice.

Local election results are some of my favorite examples of analysis-hostile public data. The app-generated PDF from Framingham, Mass. shown below was only 3 pages but with table formatting that was not designed for ease of data import. Is there a PDF conversion tool that can handle it?

Page 1 of the PDF shows recent election results for Framingham, Mass.

First place (tie): Adobe and PDFtoExcel.com. Adobe returned an Excel file in perfect format, complete with original cell colors.

While PDFtoExcel.com's spreadsheet didn't have the pretty formatting of Adobe's, all the data came in accurately, and it was usable as is.

Others:

AWS Textract: fair. Results came back in 5 tables. In one case, you'd need to copy and paste them together manually and look at the original to make sure you were doing so correctly.

PDFTables: poor. Data came back, but some in the wrong columns, whether I tried to download as multiple sheets or one sheet. This would need manual checking and cleanup.

Tabula: poor. A similar problem as PDFTables, with some data in the wrong columns, but at least I didn't have to pay for it. I tried both the Stream and Lattice extraction methods, and both had some wrong-column issues (although the issues were different).

Cometdocs: conversion failed.

Our "nightmare" comes courtesy of a presentation at this year's National Institute for Computer-Assisted Reporting conference, as an example of data that would be useful for training students if it were in a format that could be easily analyzed. It's a multi-page scanned PDF with four months of data from the federal Refugee Processing Center on refugee arrivals by country of origin and U.S. state of destination.

This PDF's challenges range from multi-page tables to lots of merged columns. In addition, the table on page 1 proved to be somewhat different from the tables on the other pages, at least in terms of how several tools were able to handle them, although they look the same.

I tested only the first 10 pages, due to the AWS 10-page limit, to be fair to all the tools.


Boys & Girls Clubs set to open eight new sites throughout Northeast Ohio this summer – cleveland.com

CLEVELAND, Ohio-- Eight new Boys & Girls Clubs are expected to open across the region this summer, including sites in Cuyahoga, Summit, Lorain and Huron counties.

"The expansion is the first step in the strategic plan of the Boys & Girls Clubs of Northeast Ohio to provide more youths with greater experiences and the opportunities they desire," said Jeff Scott, the CEO of the organization.

The first set of new clubs will open next month and will be school-based sites funded by stimulus dollars and the Ohio Department of Education.

Some of the sites will be in Euclid, Cuyahoga Falls, Garfield Heights and Akron. The organization also plans to open its first club in Huron County at New London Elementary School on June 13.

Nationally, the Boys & Girls Clubs aim to provide safe, fun places for children 6-18 to go after school and in the summer.

In Northeast Ohio, Scott's group serves children in Cuyahoga, Summit, Lorain and Erie counties. The nonprofit organization operates 40 clubs throughout the region.

After the pandemic, many school systems are pivoting back to the traditional after-school models or taking a hybrid approach. Because of that, Scott said, the organization had a duty to do more to help youths across Northeast Ohio.

There is no charge to join a club. Memberships are free and open to youth ages 6-18. Activities include athletics, academic help, arts and music programming, leadership opportunities, field trips and breakfast and lunch daily.

In the future, Scott said he hopes donations from individuals and corporations will allow the organization to create more stand-alone clubs across the region. Ultimately, he wants to add another 15 sites across Northeast Ohio.

"Our youth deserve our unrelenting efforts. We can never stop. Never sit still. This is the first step in continuing to do more for our youth," he said.

You can learn more about Boys & Girls Clubs and access membership registration forms by visiting http://www.bgcneo.org. To access the summer membership form, go to tinyurl.com/BGCNEOsummer.

The new clubs will be at:

Akron Buchtel Community Learning Center, for students who have completed sixth through 12th grades. The starting date has not been determined.

Akron North High School, for teenagers in ninth through 12th grade. The starting date has yet to be determined.

Cuyahoga Falls Preston Elementary School, for children who have completed kindergarten through fifth grade. The starting date has not been set.

Ely Elementary School in Elyria, for children who have completed kindergarten through fifth grade. The club starts June 27.

Euclid Middle School, for children ages 6-18. The starting date has yet to be determined.

Garfield Heights High School, for children who have completed kindergarten through seventh grade. The club begins June 20.

Longfellow Middle School in Lorain, for children who have completed kindergarten through seventh grade. The club begins June 13.

New London Elementary, for those who have completed kindergarten through seventh grade. The club begins June 13.


Jamstack pioneer Matt Biilmann on Web 3, Deno, and why e-commerce needs the composable web – DevClass

Interview: Matt Biilmann, co-founder of Netlify and one of the originators of the Jamstack (JavaScript, APIs and Markup) approach to web development, spoke to DevClass at the Headless Commerce Summit, which is underway this week in London.

What is the technical argument behind Jamstack? "We saw this shift happening in the core architecture of the web," Biilmann tells us, "from a world where every website, every web application was a monolithic application with templates, business logic, plug-in ecosystem, data access, all grouped together. We were fortunate to predict the shift towards decoupling the web UI layer from the back-end business logic layer, and having that back-end business logic layer split into all these different APIs and services, where some of them might be owned today, typically running in a Kubernetes cluster somewhere, but a lot of them [were] other people's services like Stripe, Twilio, Contentful, Algolia and so on."

He adds: "We saw the opportunity to build an end-to-end cloud platform around the web UI layer, and we coined the term Jamstack."

How important was the idea of reducing the role of the web server, in favour of static pages and API calls?

"Before, the stack was like your server, your web server, your operating system, your back-end programming language. Now the stack is really what you deliver to the browser and how it communicates with different APIs and services," he says.

"The important piece for us was this step up in where the abstraction is. Are you working with an abstraction that's your Apache server here, your PHP program, your Ruby program? Or are you working more in the abstraction of developing a web UI, where we can preview it locally, we can see how it looks when we push it to git, and it will automatically go live? It was a step up from having to worry about all the underlying components," Biilmann says.

Netlify's platform includes a built-in content delivery network, so that static content is always served with low latency. It now includes the concept of Edge functions: code that runs server-side, but close to where the user is located, with a serverless architecture. Is this any different from what Cloudflare is doing with its Pages and Workers, or other providers that have adopted this model?

"We're thinking about it differently," said Biilmann. "I would say that Cloudflare is really building their own application platform, a very Cloudflare-specific ecosystem where if you build an app for that application platform, you build it for Cloudflare. When it comes to our Edge functions, we spent a long time thinking about how we avoid making that a proprietary Netlify layer. That was why we started collaborating with the Deno team, who are actually working on an open source runtime for that kind of layer."

"Where we think we created lock-in is just in terms of delivering developer productivity that makes our customers stay," he added.

Why Deno and not the better-known Node.js runtime? Both were made by the same person, Ryan Dahl, Biilmann says. "He had this interesting observation that there was a growing need for developers to be able to program asynchronous I/O, and that was very hard to do in any of the existing dynamic runtimes like Ruby, PHP, Python."

"He saw that if he made a new language around JavaScript and built it around asynchronous APIs from the beginning, it's an advantage. That drove the adoption and success of Node, and what he's doing now with Deno is very similar."

The difference with Deno, Biilmann says, is that Dahl now sees that "everybody wants to start running their code in these new environments, where instead of running your code on your own server in your own data center, you want to take your code and have providers just distribute them all over the world and run your code for you."

Node libraries often come with native code dependencies, Biilmann said, which breaks this kind of deployment. Features like deeper TypeScript integration and use of ECMAScript modules also make Deno attractive.

Why is the Jamstack, headless approach important for ecommerce? "This idea of bringing the web UI very close to the user, either pre-building it as static assets or having it run with edge functions close to the user, means huge benefits in performance, and no one is more aware of the huge difference performance makes to conversion rates than the ecommerce operators. It's just so well proven by studies," Biilmann says.

Should developers care more about Jamstack, or Web 3? "There's immense hype around Web 3, and I think some of the ideas are really interesting, the idea of being able to bring your data with you to applications instead of putting your data into applications and giving it away," says Biilmann.

"Most of those applications are built with the Jamstack approach, but if you look at the number of developers on Ethereum or Solid, that's a smaller number than the developers signing up for Netlify every week."

"There are a lot of ideas there that are very aligned with our idea of what the open web means and what's good for the web. But I think they are often artificially coupled to cryptocurrencies and blockchain, and it gets very hard to differentiate."

Continue reading here:
Jamstack pioneer Matt Biilmann on Web 3, Deno, and why e-commerce needs the composable web - DevClass

Learn React: Start of a Frontend Dev Journey – thenewstack.io

Hello! Welcome to the first article in a series of tutorials that will focus on learning React.js. This is a weekly series, and after this brief introduction, it will center on building a to-do list application from scratch. I chose a to-do list because it includes all the foundational building blocks needed in a basic CRUD application.
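The four CRUD operations behind a to-do app can be sketched in plain JavaScript before any React is involved (the function and field names here are illustrative, not taken from the series):

```javascript
// A to-do list as data plus four CRUD operations. Each write operation
// returns a new array rather than mutating the old one, foreshadowing
// React's preference for immutable state updates.
let nextId = 1;

const createTodo = (todos, text) =>
  [...todos, { id: nextId++, text, done: false }];     // Create
const readTodo = (todos, id) =>
  todos.find((t) => t.id === id);                      // Read
const updateTodo = (todos, id, changes) =>
  todos.map((t) => (t.id === id ? { ...t, ...changes } : t)); // Update
const deleteTodo = (todos, id) =>
  todos.filter((t) => t.id !== id);                    // Delete

let todos = [];
todos = createTodo(todos, 'learn React');
todos = updateTodo(todos, 1, { done: true });
console.log(readTodo(todos, 1)); // { id: 1, text: 'learn React', done: true }
todos = deleteTodo(todos, 1);    // back to an empty list
```

The tutorial series will wrap operations like these in React components and state; the data logic itself stays this simple.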

Before getting into what React is, here are some recommended prerequisites, as defined by Google:

When I learned React, I was a master of exactly none of these topics. I don't want to mislead anyone, though: I was at Codesmith and learned React in a structured school environment. By that time, I had studied algorithms and basic data structures for about five months and had a fledgling knowledge of the DOM and HTTP requests. My HTML was meh at best and my CSS was a disaster. Absolutely no divs were centered before this time period.

One last word from the wise(ish): The more working knowledge you have prior to exploring React, the more ease you may find, but no one can define what learning will look like for you. Many articles and video tutorials say learning React is easy, but that is in comparison to heavier frontend libraries and frameworks. Easy was not my experience. Don't be discouraged if it isn't yours, either. I'm happy you're here and I hope you stay! Now, shall we?

Facebook developer Jordan Walke created the React.js frontend JavaScript library as a way to help developers build user interfaces with components. A library is a collection of prewritten functions and code that reduces development time and provides proven solutions for common problems.

Inspired by XHP (an HTML component library for PHP), React was first deployed on Facebook's news feed in 2011, followed by Instagram in 2012. The library was open sourced at JSConf US in May 2013.

React is open source, meaning it is completely free to access. Developers are encouraged to modify and enhance the library.

React adheres to the declarative programming paradigm. Developers design views for each state of an application and React updates and renders components when data changes.
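The declarative idea can be sketched without React at all: describe what the UI should look like for a given state, and re-render whenever the state changes. The plain-JavaScript toy below is not React's actual API, just the paradigm:

```javascript
// A declarative view is a pure function of state: given the same state,
// it always describes the same UI. Here the "UI" is just a string.
const view = (state) =>
  `Count: ${state.count} (${state.count % 2 === 0 ? 'even' : 'odd'})`;

let state = { count: 0 };

// A toy "setState": update the state, then re-render the whole view.
// React works the same way conceptually, but diffs the output and
// touches only the DOM nodes that actually changed.
function render() {
  console.log(view(state));
}
function setState(changes) {
  state = { ...state, ...changes };
  render();
}

render();                             // Count: 0 (even)
setState({ count: state.count + 1 }); // Count: 1 (odd)
```

The developer never writes imperative "find this element and change its text" code; they only describe the view for each state, which is the trade React's declarative paradigm offers.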

Documentation: React has a proper maintenance team via the engineers who actively work on React. As a result, the React.js documentation is incredibly professional. They have docs on docs on docs. Need to find something that isn't in the React docs, or want to search for something super specific on Google? No problem: enter Stack Overflow and the numerous blog posts (hello) that are also here to help you. I've worked with technologies that have a large footprint and those with a very small one. The larger the footprint, the easier and more independent the coding experience is.

Vast Career Potential: Uber, Bloomberg, Pinterest, Airbnb, and Skype are just a few of the companies that use React. Its popularity is growing as more companies adopt it, and Google estimates the average earnings for a React developer in the US at $119,990.

Longevity: Any time a library is used, there's a risk that maintenance could be discontinued. It happens all the time. So when choosing a library, it's best to select one with a large community, and I hope it's clear by now that React has one. Updates are still current after 10 years and popularity is only growing. Projects and skills are safe here.

One of the things I valued most about learning from my instructors at Codesmith was that they taught me to use the proven engineering tools at my disposal. React works. It's optimized for performance and effectiveness yet leaves plenty of room for creativity. Some of the greatest engineering minds put their best effort into building this library. I don't have to build my applications from scratch and can lean on these tools and libraries when it suits the project.

Leaning on a library, framework, or template isn't cheating. It's solid engineering. Engineering, in my opinion, isn't taking the hardest, most laborious path forward; it is solving a challenge the best way possible with the most optimized solution you know of at the time. And now I would like to present to you a very lean, mean, optimized frontend machine.

In the next article, I will cover the following topics: state, components, JSX, how to render JSX to the browser, and how to set up the files in an IDE.

Read more here:
Learn React: Start of a Frontend Dev Journey - thenewstack.io

Machine learning hiring levels in the medical industry fell to a year-low in April 2022 – Medical Device Network

The proportion of medical companies hiring for machine learning related positions dropped in April 2022 compared with the equivalent month last year, with 28.3% of the companies included in our analysis recruiting for at least one such position.

This latest figure was lower than the 32.9% of companies that were hiring for machine learning related jobs a year ago, and a decrease from the 38.4% recorded in March 2022.

As a share of all newly posted job advertisements, however, machine learning related postings rose in April 2022, to 0.9%.

This was the highest monthly figure recorded in the past year, and an increase from the 0.8% of newly advertised jobs linked to machine learning in the equivalent month a year ago.

Machine learning is one of the topics that GlobalData, from whom the data for this article is taken, has identified as a key disruptive force facing companies in the coming years. Companies that excel and invest in these areas now are thought to be better prepared for the future business landscape and better equipped to survive unforeseen challenges.

Our analysis of the data shows that medical companies are currently hiring for machine learning jobs at a rate lower than the average for all companies within GlobalData's job analytics database. The average among all companies stood at 1.3% in April 2022.

GlobalData's job analytics database tracks the daily hiring patterns of thousands of companies across the world, drawing in jobs as they're posted and tagging them with additional layers of data on everything from the seniority of each position to whether a job is linked to wider industry trends.

You can keep track of the latest data from this database as it emerges by visiting our live dashboard here.


Read more here:
Machine learning hiring levels in the medical industry fell to a year-low in April 2022 - Medical Device Network

Madrona and PitchBook Partner to Bring Machine Intelligence to the Intelligent Applications 40 (#IA40) List – Business Wire

SEATTLE--(BUSINESS WIRE)--Madrona, a leading venture investor in artificial intelligence and machine learning companies, today announced a partnership with PitchBook to power the 2022 Intelligent Applications 40, #IA40, and released data based on the 2021 list. Leveraging PitchBook's industry-leading data as well as a new machine learning model, PitchBook and Madrona provide differentiated analysis on the market outlook for intelligent applications.

According to PitchBook, IA40 companies have, in aggregate, raised over $3 billion in new rounds of financing since the launch of the inaugural IA40 list in late November 2021. Additionally, despite the current market turmoil, these companies have announced over $848 million in new venture financing in the second quarter, further reinforcing the promising long-term market outlook for intelligent applications. The IA40 companies will need to navigate the same challenging market conditions faced by all VC-backed startups, but they show promise when applying the PitchBook predictive algorithm.

"We are excited to bring PitchBook on board as we look to the #IA40 2022, to be released this fall. Madrona has been investing in the founders and teams building intelligent applications for over ten years. We believe machine intelligence is the future of software," commented Ishani Ummat, Investor at Madrona. "What better way to help generate a meaningful list of intelligent app companies than to leverage machine learning and predictive software in the process?"

Madrona launched the inaugural IA40 with support from Goldman Sachs and 50 of the nation's top venture firms in the fall of 2021. A ranking of the top 40 intelligent application companies, the list spans early- to late-stage private companies across all industries. Intelligent apps harness machine learning to process historical and real-time data to create a continuous learning system. Companies on the inaugural list include Starburst, Gong, Hugging Face, OctoML, SeekOut and Abnormal Security. See the full list at http://www.ia40.com

PitchBook is well-known for delivering timely, comprehensive, and transparent data on private and public equity markets collected through its proprietary information infrastructure. In addition to distributing data and research, PitchBook's Institutional Research Group also develops tools and models that help clients make more informed investment and business development decisions. The algorithm powering the 2022 IA 40 list is part of a larger initiative that will enable PitchBook users to predict liquidity events for private companies and will be launched later this year.

"At PitchBook, we're constantly expanding our data and research across all asset classes and building tools to actively surface insights for our clients. Combining our data and insights with machine learning capabilities, we're in a unique position to predict outcomes and enhance decision-making for our core clients. Our work with Madrona and the IA 40 is a powerful example of the possibilities associated with intelligent applications and applying the technology to lead to better outcomes for our industry," commented Daniel Cook, CFA and Head of Quantitative Research at PitchBook.

Read more on our blog about the partnership.

Interested in the IA40 founders and companies? Check out our podcasts with founders from Starburst, RunwayML, Hugging Face, OctoML, and SeekOut, with more on the way! https://www.madrona.com/category/podcast/

About Madrona

Madrona (www.madrona.com) is a venture capital firm based in Seattle, WA. With more than 25 years of investing in early stage technology companies, the firm has worked with founders from Day One to help build their company for the long run. Madrona invests predominantly in seed and Series A rounds across the information technology spectrum, and in 2018 raised the first fund dedicated to initial investments in acceleration stage (Series B and C stages) companies. Madrona manages over $2 billion and was an early investor in companies such as Amazon, Smartsheet, Isilon, Redfin, and Snowflake.

Original post:
Madrona and PitchBook Partner to Bring Machine Intelligence to the Intelligent Applications 40 (#IA40) List - Business Wire

How we learned to break down barriers to machine learning – Ars Technica

Dr. Sephus discusses breaking down barriers to machine learning at Ars Frontiers 2022. Click here for transcript.

Welcome to the week after Ars Frontiers! This article is the first in a short series of pieces that will recap each of the day's talks for the benefit of those who weren't able to travel to DC for our first conference. We'll be running one of these every few days for the next couple of weeks, and each one will include an embedded video of the talk (along with a transcript).

For today's recap, we're going over our talk with Amazon Web Services tech evangelist Dr. Nashlie Sephus. Our discussion was titled "Breaking Barriers to Machine Learning."

Dr. Sephus came to AWS via a roundabout path, growing up in Mississippi before eventually joining a tech startup called Partpic. Partpic was an artificial intelligence and machine-learning (AI/ML) company with a neat premise: Users could take photographs of tooling and parts, and the Partpic app would algorithmically analyze the pictures, identify the part, and provide information on what the part was and where to buy more of it. Partpic was acquired by Amazon in 2016, and Dr. Sephus took her machine-learning skills to AWS.

When asked, she identified access as the biggest barrier to the greater use of AI/ML; in a lot of ways, it's another wrinkle in the old problem of the digital divide. A core component of being able to utilize most common AI/ML tools is having reliable and fast Internet access, and drawing on experience from her background, Dr. Sephus pointed out that a lack of access to technology in primary schools in poorer areas of the country sets kids on a path away from being able to use the kinds of tools we're talking about.

Furthermore, lack of early access leads to resistance to technology later in life. "You're talking about a concept that a lot of people think is pretty intimidating," she explained. "A lot of people are scared. They feel threatened by the technology."

One way of tackling the divide here, in addition to simply increasing access, is changing the way that technologists communicate about complex topics like AI/ML to regular folks. "I understand that, as technologists, a lot of times we just like to build cool stuff, right?" Dr. Sephus said. "We're not thinking about the longer-term impact, but that's why it's so important to have that diversity of thought at the table and those different perspectives."

Dr. Sephus said that AWS has been hiring sociologists and psychologists to join its tech teams to figure out ways to tackle the digital divide by meeting people where they are rather than forcing them to come to the technology.

Simply reframing complex AI/ML topics in terms of everyday actions can remove barriers. Dr. Sephus explained that one way of doing this is to point out that almost everyone has a cell phone, and when you're talking to your phone or using facial recognition to unlock it, or when you're getting recommendations for a movie or for the next song to listen to, these are all examples of interacting with machine learning. Not everyone groks that, especially technological laypersons, and showing people that these things are driven by AI/ML can be revelatory.

"Meeting them where they are, showing them how these technologies affect them in their everyday lives, and having programming out there in a way that's very approachable. I think that's something we should focus on," she said.

Read this article:
How we learned to break down barriers to machine learning - Ars Technica

Ruth Mayes Walks Through the Ins and Outs of Machine Learning – Seed World

How much do you know about machine learning and how it can be applied to plant breeding? It's a complicated subject, but Computomics, a bioinformatics data analysis company for plant breeding based in Germany, sat down with us at the International Seed Federation's World Seed Congress to help us understand more about it.

"Computomics was founded 10 years ago, and our co-founder was one of the first scientists to apply machine learning capabilities to biological datasets," says Ruth Mayes, director of global business strategy at Computomics.

And now, 10 years on from its founding, Computomics offers an innovative predictive breeding technology that allows breeders to identify the genetics within a crop's germplasm, find a target and breed forward.

"We take field data and the correlating genetic markers to build a model using machine learning. And we look at combinations of genetic markers, how these markers combine together, and how that influences the phenotype," says Mayes.

But why is machine learning a game changer for plant breeding?

"It allows the breeder to go beyond just testing his germplasm to actually understanding all the elite genetics within it," Mayes says. "This allows him to really define a target, which is a trait or feature, and really breed towards it."

Make sure to visit the Computomics website to learn more about its innovative machine learning technology, which helps plant breeders achieve the best possible future crop varieties.

Originally posted here:
Ruth Mayes Walks Through the Ins and Outs of Machine Learning - Seed World