Will C-3PO die in Star Wars: The Rise of Skywalker? – Headlinez Pro

Star Wars Episode IX's final trailer is full of epic moments, poignant farewells (that Leia quote!) and ominous returns (The Emperor!), but does it also predict the demise of one of the franchise's original characters?

That's the question many fans were left with after the trailer appeared to show Anthony Daniels' long-suffering droid C-3PO bidding farewell to his fellow travellers while wired into some equipment.

"What are you doing there, Threepio?" Oscar Isaac's Poe Dameron asks.

"Taking one last look, sir... at my friends," he responds.

Given C-3PO's usually sarcastic or somewhat cowardly attitude, it's an unusually sincere moment for the old droid, while his position, with his headpiece removed and his mind wired into something, could suggest he's about to try to take control of some form of outside technology or spaceship, as has been demonstrated in the franchise before (notably, Phoebe Waller-Bridge's L3-37 ended up uploading her mind into the Millennium Falcon in Solo: A Star Wars Story using a similar method).

Fans certainly seem to think he's up for the chop, at least...

Can't get this out of my head #TheRiseOfSkywalker #C3PO pic.twitter.com/K49CEjxvDy

Chopper (@shadowthunder61) October 22, 2019

C3PO really said "taking one last look sir.. at my friends" pic.twitter.com/O86WRRRkDg

adam (@ehhitsadam) October 22, 2019

I never thought I'd be so emotionally compromised thanks to C-3PO but here we are #TheRiseOfSkywalker pic.twitter.com/C5oZsfeV30

sara (@sarareneexo) October 22, 2019

when c3po says he wants to take a last look at his friends pic.twitter.com/dIq5nnH8Su

pete (@spideylovebot) October 22, 2019

But are they right? Is Threepio about to make an epic sacrifice, destroying his body (or at least his mind) in a bold effort to save his friends?

Well, it's possible. Footage shown of the film so far does seem to show the droid having a more central role in the story than he has had in some time, travelling with Poe, Finn and Rey (Isaac, John Boyega and Daisy Ridley), suggesting he'll be heading into more perilous climes again.

Plus, there's that shot of him in the previous trailer with ominous red eyes, apparently in the same location, suggesting that he does hook up to some outside influence during Episode IX, perhaps at his own risk.

And frankly, narratively it makes perfect sense. Daniels' C-3PO is one of the only remaining original characters from the original trilogy, and Daniels himself has the unique distinction of appearing in every Star Wars film so far (though in Solo, he wasn't playing C-3PO).

Anthony Daniels is C-3PO in STAR WARS: THE RISE OF SKYWALKER

As the Skywalker saga finally comes to an end, perhaps JJ Abrams has decided that the character who came to life in Episode I should be laid to rest in Episode IX, marking the end of this particular story with his passing and sending a signal to viewers that no matter how many Disney series or new trilogies are on the way, this is still the end of something.

So yes, it's eminently possible, even probable, that C-3PO is on his way out. Now all we have to do is solve the mystery of why he's suddenly decided Finn, Rey and Poe are his friends after barely interacting with them in the previous two movies...

Star Wars Episode IX: The Rise of Skywalker is released in UK cinemas on 19th December.


The teens don’t want to love TikTok – The Outline

I first encountered the TikTok personality Angel Mamii when I stumbled across a video (er, a TikTok) of her running through a Walmart aisle, frantically explaining to her son that they need chargers because, in her words, "all my chargers are missing." The entire clip, from start to finish, is a stiffly staged comedy sketch with an ending so abrupt it acts as its own twist, leaving the viewer wondering not "What happens next?" but "What just happened at all?"

The confusing dialogue, clunky acting, and randomized narrative of the scene could be read as an argument in favor of contemporary filmmaking techniques by way of showing what it would be like if we had none of them. So, naturally, Mamii's TikToks have been hailed as dadaist sensations and, following an endorsement from Barstool Sports, she and her partner now sport merch featuring the media company's logo (a stool) in her clips.

While Angel Mamii's videos are truly unique (and terrible) iterations of the medium, they are proof that TikTok, the social media app that allows users to record, edit, post, and remix videos ranging from a few seconds to a minute, isn't just a platform for Gen Z-ers to post videos of themselves doing vape pranks, but an absurd platform on which users create confusing feedback loops of content, ever mimicking themselves and reality into asininity.

Pages upon digital pages have already been written about TikTok, usually surrounding conversation around youth culture, the simultaneous terror and joy that social media inspires, and the embarrassing attempts of older generations to keep up. From compilation videos and Twitter threads to articles deconstructing the app's most popular trends and how the teens are handling micro-fame, Gen Z is commended for their creativity and resourcefulness. "It's one of the most important companies on the planet, and it's at the forefront of the possible future of social media," Vox asserted. The Week gushed that the app "has captured Generation Z in both its structure and its culture, [reflecting] the conditions of their lives." The deluge of uncritical media coverage of the app is perhaps best summed up by the gibberish New York Times headline: "High Schools to TikTok: We're Catching Feelings."

Why is a highly curated platform with mysterious algorithmic forces and out-of-sight data collection seen as indicative of anything, let alone the preferences of an entire generation? Teenagers have been filming videos of themselves doing weird and/or dumb shit for years, and they've been uploading those videos of themselves to online platforms since Johnny Knoxville first tried to do a backflip nude off an 18-wheeler. And if you think that the rapid circulation of videos on TikTok is a new phenomenon, the Star Wars kid, who circulated on peer-to-peer networks like Kazaa and was hosted on sites like Newgrounds, would like to argue otherwise. We often speak of the importance of not understating the impact and influence of internet platforms, but have we considered the possibility that, as a corrective, we end up overstating them?

Algorithms and platforms of today play an incredible role in our lives: they shape our psyches and behavior, often invisibly, with side effects that range from harmless to insidious. TikTok, whose parent company, ByteDance, is based in China and is therefore beholden to its government's censorship regime, has used its algorithm to suppress content documenting the protests in Hong Kong. It's important to remember that our feeds are political, even and perhaps especially when they espouse no politics at all.

But also consider, despite the breathless press, that TikTok is not especially unique. It is just the latest version of a user experience formulated to increase screen time that preys upon the addictive centers of the brain. But because it's a new social network that is distinct enough to take some time understanding and getting used to, its essential sameness is obscured to us adults, who end up focusing on its aesthetic differences and the fact that kids today can't have a conversation without their faces in their phones. This is not unlike how we fret about vaping being an unprecedented threat to teen wellness, even though teens have always sought out sketchy intoxicants and other things that endanger their lives, or how we marvel at their expectation of praise, even after years of having to defend our own participation trophies.

One could argue that the stock a cultural commentator places on any set of broad signifiers corresponds with how far removed that cultural commentator is from the group they're discussing. There is literally no way that a teen believes that making a TikTok video is the expression of generational identity that certain Olds seem to believe it is. They're just doing the things that teens have always done, but with different tools.

Millennials, the generation that has been synonymous with avocado toast and self-obsession, to which I belong, were constantly accused of and berated for killing whichever industry or antiquated practice their elders wanted to reminisce about at any given time. Remember the printing press? The 8-track? The broom? Whereas Millennials were endlessly criticized for our hand in accelerating relationships to technology, Zoomers, i.e., anyone born between 1997 and 2012, are commended for their creative uses of everything from TikTok to Google Docs. They're lauded as the generation responsible for escalating talks of real-world structural change, both environmentally and culturally; when Millennials tried talking about this stuff, we were often scoffed at and given sarcastic labels such as snowflakes and social justice warriors. Maybe it is just Millennials' turn at this, as every generation bemoans and mediates the previous generation's entitlement and ease. Some have tried to combat the judgmental wave and set a new precedent, which only confirms the pattern's power and relevance. Are we doomed to have killed for tappa-tappa-tappa?

When I think about what makes TikTok unique, one phrase comes to mind: it is a den of lies. One of the most popular formats on the app involves the initial set-up for a miniature storyline: the creator fills us in on some betrayal without offering a resolution, or makes some dubious assertion, or simply engages in random scandal-baiting. Sometimes in the caption, sometimes in the video itself, is some sort of promise to fill us in on the entire story in a follow-up post, but only on the condition that a certain predetermined level of engagement is met on the original post. (Imagine that rather than immediately gifting this video of a cockroach dragging a cigarette butt across a grate to the world, its creator first posted a front-facing camera video in which they promised to show us a crazy thing that a cockroach was doing, but only if their original clip got 100,000 views.)

Comments claiming the creator is clout chasing often (deservedly) accumulate, and depending on the format, the video may be called out as being staged or fake. As engagement swells, the second video either does or doesn't come. Even if a follow-up does end up coming, it's generally anticlimactic and never lives up to the hype. Even the creator is over it, having racked up all the engagement, achieved the dopamine high, and often can't even believably pretend the reveal was worth waiting for. Perhaps the peak of excitement, for creator and viewer alike, rests in the promise of skyrocketing likes and comments, the anticipation for the titillating cliffhanger. This, too, is nothing that tabloids didn't already figure out decades ago, a discovery which was then recycled in the clickbait boom of just a few years ago. There is a question of whether content is truly engaging or just pantomiming through the motions. But does it matter if the effects are the same?

Perhaps TikTok actually is a microcosm for our cultural climate, but slightly off-kilter. Performance studies would have us believe that our every action is both a communication and a manipulation. On TikTok, we experience human communication from the metal box in our hands, watching performed reflections of human life, fabricated events, and everything in between. It's not novel to point out the perils of attempting to extrapolate conclusions from mediated versions of something rather than the thing itself, but the possible difference between pre- and post-online life is that we really ought to know better. TikTok, like many platforms, is less indicative of social trends than it is of the priorities of its algorithm, which could drive even the most guarded content creator to desperately overexpose themselves.

The question of whether it is still (or was ever) helpful to sum up two decades of people in sweeping statements is now debatable, and even more so with the exponential rate of technological innovation, and thus of communication and performance. Why are we still relying on this dynamic of generational analysis? Isn't it about time to move on?

If we must discuss cultural shifts by generation, let us step back and analyze the greater forces shaping such transformations. The truths will become obvious. Did Millennials decide not to buy houses because we like to keep our options open and/or spend all our money on lattes, or because we came of age in a precarious economy whose recovery has yielded a prohibitively expensive housing market? This is not so different from the notion that today's teens might not actually be gravitating to TikTok as much as they're settling for something that approximates the energy and chaos of the offline social lives they cannot fully lead because the infrastructure of real-life community has eroded. If we are to learn anything from each other, let us learn it from each other, and not by peeking out from whatever internet cave into which we've been shuffled.


Sister Wives: Maddie Brush Loves Taking Pics Of Her Kids – Is Her New Camera Any Good? – TV Shows Ace

Sister Wives fans know that Maddie Brown Brush recently gave birth to her daughter Evie-K. From the get-go, she's shared cute pictures of her little one and of elder brother Axel interacting with her. But, she wants to take more photos, to the point that she went off and got herself a new camera. Is it a good buy for Maddie? Let's find out.

Reality stars sometimes opt to leave the kids out of their social media. Ashley Martson of TLC's 90 Day Fiance comes to mind. But for other reality stars, it's not easy to keep them out of the limelight as their kids make up a large part of the show. Think Teen Mom, Little People, Big World, and Unexpected. For Maddie, who grew up in a family show and is no stranger to cameras, there's no problem with sharing pics of her kids. Fans get spoiled by Maddie, who cheerfully shares photos.

Taking to her Instagram on Monday, Maddie shared a new photo of Evie K. She captioned it with, "Got myself a new mom camera... Obviously I am having wayyyyy too much fun." She sure is cute though! And the photo of Kody Brown's granddaughter is very cute indeed. The newest addition to the Sister Wives family wears a bright yellow bow on her head, and her big blue eyes stare at the camera. Naturally, fans wanted to know what type of camera she bought.

These days not everyone bothers with lugging around a camera. After all, most decent phones come with quite good cameras built in. It makes for ease when uploading to Instagram, and no special knowledge about lenses and lighting is required. But for those who wish to up their art with photos, a separate camera's not a bad idea. In answer to questions, the Sister Wives star said she bought a Nikon D3500, adding that she thinks it's "WORTH IT!" The D3500 is a step up from the older D3400. The camera was first released in 2018, and in 2019 it won the TIPA Best DSLR title.

Digital Camera World reviewed it as, "The D3500 isn't just Nikon's cheapest and simplest DSLR, it's also its lightest, weighing just 415g, body only, and that's with the battery and a memory card. It will usually come with a lightweight 18-55mm AF-P kit lens which has a retracting mechanism to make it more portable when it's not switched on." Mind you, when they say it's the cheapest, potential buyers should note that the price ranges from $499.95 to $849.95, depending on which lens kit you prefer. They also suggest that no smartphone can yet match the quality of a good DSLR. For nearly $1,000, you could get their top phone with an inbuilt camera, the Apple iPhone 11 Pro.

The Sister Wives star chose an entry-level DSLR camera. The buttons and dials are easy to understand, and there's even a guide mode for newbies. However, PC Mag notes the drawback. They wrote, "There's no Wi-Fi, but the D3500 does have Bluetooth for wireless file transfer. It can transfer 2MP JPG images to your Android or iOS device using the Nikon SnapBridge app, a free download for either platform." So, you'd need to take an extra step when it comes to sharing your pics online.

But, if Maddie's looking to learn more about photography, the camera's probably a good choice for her. Do you use a DSLR? Or, do you prefer to just snap a pic using your smartphone? Sound off your thoughts in the comments below.

Remember to check back with TV Shows Ace often for more news about the cast of TLC's Sister Wives.

Woryn is a writer who started a small book publishing company in New Zealand. He wrote three books, one of them published by Domhan. Woryn prefers to write about real things rather than fiction, which is why he likes to focus on Reality TV shows.


Megan Barton-Hanson shares shady post after ‘splitting’ from girlfriend Chelcee Grimes – Heat World

by Nathan Katnoria | 20 10 2019

Love Island and Celebs Go Dating star Megan Barton-Hanson has shared a series of cryptic quotes on Instagram that appear to throw shade at ex-girlfriend Chelcee Grimes following reports they've split.

Megan began dating Chelcee last month after her relationship with TOWIE star Demi Sims fizzled out, and even starred in the pop star's music video for Time To Talk, but it's thought the pair struggled to make things work with their busy schedules.

"They got on really well and enjoyed a brief romance, but decided to call it a day earlier this week. Both were a bit upset about it, but remain pals," a source told The Sun.

"Chelcee is throwing herself into her music projects and concentrating on recording new tracks at the moment. She is working on a music video so that is taking her mind off her love life."


Swipe through to see all the surprising celebrity couples we never saw coming...

Gabby Allen and Myles Stephenson

After her split from fellow Love Island star Marcel Somerville, CBB star Gabby Allen announced she is now dating X Factor winner Myles Stephenson.

Prince Harry and Meghan Markle

Prince Harry and Suits actress Meghan Markle began dating in October 2016 and married in May 2018. Their first child is due in Spring 2019.

Channing Tatum and Jessie J

After splitting from his wife in early 2018, EVERYBODY was left in shock when reports suggested Channing Tatum is dating pop star Jessie J.

Instagram/ Caroline Flack

Andrew Brady and Caroline Flack

Love Island host Caroline Flack announced she was dating former star of The Apprentice and Celebrity Big Brother contestant Andrew Brady by uploading a picture of the pair kissing to her social media accounts.

At the end of April 2018, the couple announced their engagement but later announced they'd split up in July 2018.

Selena Gomez and The Weeknd

Nobody saw a romance between singers Selena Gomez and The Weeknd coming! But they went strong until October 2017 when the pair reportedly split up.

Cheryl and Liam Payne

Cheryl and One Direction singer Liam Payne shocked the world when they announced they were dating back in February 2016. Despite a ten year age gap, the couple welcomed their first child, Bear Grey Payne, back in March 2017. However, in July 2018, Cheryl and Liam released a joint statement revealing they'd split.

Katy Perry and Russell Brand

Katy Perry and Russell Brand married in 2010, much to the surprise of fans. Their marriage was short-lived, leading to a divorce in 2012.

Taylor Swift and Tom Hiddleston

Taylor Swift and Tom Hiddleston were 2016's most talked about romance! Unfortunately, the couple went their separate ways a year later.

Rob Kardashian and Blac Chyna

Rob Kardashian started dating Blac Chyna in 2016, despite her previously dating Rob's sister's ex-boyfriend Tyga. They welcomed a baby girl together called Dream and later ended on very bad terms!

Vogue Williams and Spencer Matthews

Fans didn't expect Vogue Williams and Spencer Matthews to become a couple! The pair met when they were filming for Channel 4 show The Jump in 2017, got married in June 2018, and welcomed son Theodore in September 2018.

Sacha Baron Cohen and Isla Fisher

Sacha Baron Cohen and Isla Fisher married in 2010 after a six year engagement. Nobody saw this marriage coming with the unusual movie roles Sacha plays!

Stephanie Pratt and Jonny Mitchell

Love Island star Jonny Mitchell and Made In Chelsea's Stephanie Pratt left fans in shock when they confirmed their romance with this picture in September 2017. Sadly, the couple split in December 2017.

Benji Madden and Cameron Diaz

Actress Cameron Diaz married Benji Madden just two months after she met him, despite many not seeing their romance coming.

Nicole Richie and Joel Madden

After starring in TV show The Simple Life with her best friend Paris Hilton, Nicole Richie turned a lot of heads when she married Good Charlotte singer Joel Madden in 2010! They share children Harlow and Sparrow.

Chris Martin and Jennifer Lawrence

After his split from Gwyneth Paltrow, Coldplay singer Chris Martin was spotted getting cosy with The Hunger Games actress Jennifer Lawrence. Despite not actually confirming a romance, the pair reportedly called things off in 2016.

Sam and Aaron Taylor-Johnson

Fans of actor Aaron Johnson were left in shock when he married Fifty Shades Of Grey director Sam Taylor. Despite Sam being 24 years older than Aaron, the pair combined their surnames when they got married in 2012. They now have two daughters together, Wylda and Romy.

Mary-Kate Olsen and Olivier Sarkozy

Olsen twin Mary-Kate married French banker Olivier Sarkozy in 2015, despite a 17-year age gap between the pair.


Neither Megan nor Chelcee has confirmed their split, but the ex-Love Islander appeared to throw a healthy dose of shade at her former flame in a cryptic post on her Instagram story that hinted at the split.

25-year-old Megan first shared a quote which read: "2020 is in 3 months: we're having good sex with people who deserve us, staying hydrated, leaving behind toxic relationships, communicating our feelings, not letting anyone waste our time, and spoiling ourselves with love!"

She then followed it up with another quote, adding: "Reminder: relationships are supposed to make you feel good."

Chelcee also seemed to suggest she and Meg had hit the skids by sharing a poem which included the line: "Instead of standing by my side, you tricked me into conceding. Into dimming my own light, feeling guilty for succeeding."


Megan and Chelcee's reported split comes just weeks after the reality star's brutal break-up with Chloe Sims' younger sister Demi, who claimed Meg used her and broke her heart.

Demi revealed Megan, who came out as bisexual earlier this year, dumped her just one day after they finished filming Celebs Go Dating and even suggested that Chelcee had asked her on a date before getting with Megan.

She told The Sun: "This girl DM'd Megan out of the blue. Now she's with her. She had been liking her photos. They both hurt me by going behind my back."

"I have known the girl for years. She's one of my best mate's ex-girlfriends. She asked me to go for a drink two weeks before she started messaging Megan."


The Power of TuneCore: Kevin Cornell Interview – DJBooth

Photo Credit: TuneCore

Making music is one thing, but how do you get that music out to prospective fans? Enter TuneCore: a distribution service existing at the intersection of artist education and empowerment. The TuneCore platform acts as a pillar in myriad indie artists' careers. Artists like Taylor Bennett and Witt Lowry have used TuneCore to get their music to the masses, fund their creative pursuits, and help them maintain ownership over their careers. If you're an independent artist looking to stay indie, TuneCore is one of the best platforms on the market.

"When we found TuneCore, we instantly knew that it was the application we needed to further my music career," Taylor Bennett told us earlier this year. "I distributed the project with TuneCore, and on iTunes, it was one of the top five projects. It was next to Drake. For me, as an independent artist, to be able to see my music next to Drake's music... that's when I fell in love with TuneCore."

So, we know TuneCore works for artists, but how does the platform put in that work? Who is working behind the scenes to make sure artists are getting their dues for their music? To answer these questions, and more, we spoke with Kevin Cornell, TuneCore's Content Editor. In talking with Cornell, we get a greater sense of how TuneCore exists to empower and educate artists, and make creatives' dreams come true.

Our conversation, lightly edited for content and clarity, follows below.

DJBooth: What's your favorite part of working at TuneCore?

Kevin Cornell: I've been here for five years now. The company's just been growing and going positively in the right direction. I would say we hire good people, smart people, tech people, music people. People get along here for a reason. I don't think I've ever worked somewhere where people are good in and out of work. The common goal is to help artists. The combination of those two things: being able to support artists as a company, and being able to do it with smart, forward-thinking people, is big.

What would you say is the basic TuneCore value statement?

We benefit from being trusted by independent artists. Independent artists having the ability to get their music online without a label, or a more traditional distributor, has only been a thing for so long. We know that when we make changes or communicate with artists, we're building educational content with the artist in mind. They know how to use our platform and why they use our platform. But being able to offer them more is important. Being able to get artists' music out to stores, out to streaming platforms, and making sure they're getting paid 100 percent of what they earn is a big part of that value statement.

Tell me about the "more" of TuneCore.

Aside from just doing distribution, I have to harp on the fact that we offer music publishing administration. With our YouTube Sound Recording service, we're using Content ID to collect artists' revenue whenever their songs are being used across YouTube. So if your song is being used across YouTube, you're getting paid for that. The "more" is also the support level. We have an office here in Brooklyn, full of people, real people, here to help you out. In the entertainment relations capacity, we have boots on the ground [in] Atlanta, New Orleans, Austin, and Nashville. We have people who are dedicated to helping artists in those territories grow and utilize TuneCore better. But it's beyond that. Our Entertainment Relations team members are holding networking/educational events and artist consultations, giving artists a chance to network with each other and people in the industry.

So [artists] may come to learn more about TuneCore, but they may also come to learn more about radio promotions, PR, and publishing. Stuff that is so much more in the hands of the artists these days. It's all about connecting the dots and providing a platform for them to learn. You can access a lot of great information online, and there's so much an artist can teach themselves these days, but it's overwhelming. So we're looking to be a thought-leader with our Artist Advice hub by providing blog articles, Survival Guides, podcast episodes, and more. It's not easy trying to reach a huge artist base across genres and levels of success, but we're trying to hit on all those notes and provide as much real-world advice and education that isn't behind a paywall. You don't even have to be distributing music yet to get access to it!

TuneCore headquarters in Brooklyn, NY.

Talk to me about the empowerment component of TuneCore, the boots-on-the-ground aspect of it.

Our Entertainment Relations team is ready to offer support to artists, managers, and small labels that use TuneCore. They're on the ground to be a resource for people. It's giving artists access to information in one central location or throughout one experience using TuneCore.

When you're using TuneCore, it's more than uploading your music. You're taking control of your distribution and publishing administration, which is stuff you couldn't do [before]. It's underrated how much control you have. The ability to plan. The ability to put a release plan together and build out campaigns on your own... This is, to me, what empowerment means. You can market your music across all platforms. You can find out where your fans are in the world and what platforms they're using.

With TuneCore's distribution to over 100 countries worldwide, your releases can truly have a global reach, and that's a huge door-opener for a lot of artists. You might be based in Pennsylvania but have a huge following in South Korea. You might be based in London, but find you have a huge following in South Africa. That's the coolest. You can get your music everywhere and see where people are listening, and that can impact your tour selection.

For artists looking to use TuneCore, do you have any quick best practices, so they get the most out of the platform?

Number one: Planning and timing. TuneCore can get your music onto platforms pretty quickly, but if you've spent all this time thinking about this project and you put off the actual delivery of it to the last minute? You set yourself up for potential issues. Too many times I've seen artists do this and say, "Oh, I'm just gonna upload it 72 hours beforehand." No, no! That works for some artists, obviously, but newer artists should be treating this like anything else when they're managing their music career: with careful consideration and preparation. You're not going to start your PR campaign a week before the release. You need lead time. Treat your release like a label would. It's the best practice to get into when you're managing your career.

Every release is an opportunity to learn. What did your stats look like? This can influence the way you market your music on social. If you're gonna look at your Facebook post engagement, you should look at what's going on with your last release. Going back to the global reach that TuneCore provides artists, you can start experimenting with marketing to new territories. Why not start looking at those London blogs? Everyone harps on the fact there's never been a better time to be an indie artist. I don't disagree with that at all, but we overlook the time, research, and planning that goes into a good release campaign. TuneCore offers a suite of Artist Services that can help with that.

What's the best part of being the facilitator for artists' dreams?

That is such a heavy title! People are genuinely appreciative of what we do, and that's awesome. I'm an indie music fan, living and working in Brooklyn. If I were an accountant at Yelp, I'd still be listening to indie music and going to shows. To me, this is an opportunity to take what I care about outside of work and think about it at work.


Technology exposed Syrian war crimes over and over. Was it for nothing? – MIT Technology Review

On April 23, 2014, Houssam Alnahhas slid into the back seat of a car in the southern Turkish city of Gaziantep and headed for the Syrian border, about 30 miles (48 kilometers) away. A tall 26-year-old medical student with striking gray eyes, he had escaped Syria two years before and was working for a task force training medical personnel in opposition-held areas. But now he was heading back with a mission: to collect evidence of war crimes.

Two weeks earlier, Alnahhas had started receiving reports that barrel bombs were being dropped on towns in the countrys rural northwest. He was used to such news in his work, but this time was different. Usually the crude devices were packed with explosives and shrapnel. But doctors were telling him these latest bombs were releasing poisonous clouds of chlorine gas.

Chlorine gas had rarely been used as a weapon since World War I, and its use in Syria would be a major violation of international norms. Western governments wanted to know if there was proof. And so, over the next two days, he and two of his friends visited two villages that had allegedly been hit, Kafr Zita and Talmenes, to see what had taken place.

The trip was dangerous. They were close to the front lines of the civil war, where rocket, mortar, and sniper fire were common. If agents of the Syrian regime got word of what they were doing, their lives would be in peril: Alnahhas had heard rumors that someone who'd collected evidence from a chemical attack a year earlier had been assassinated while attempting to bring it to Turkey.

But the threat of violence wasn't the only thing weighing on his mind. Alnahhas knew that many groups (supporters of Syrian president Bashar al-Assad, the Russian and Iranian governments, online conspiracy theorists) would use any opportunity to insist that chemical-weapons attacks were false-flag operations or outright hoaxes. And since he was acting on his own, without institutional backing, he wanted to make sure the evidence he collected was unimpeachable.

As soon as he crossed the border, Alnahhas started tracking his coordinates using GPS and recording the trip on video. In the two villages, residents described witnessing yellowish-orange smoke rising after helicopters dropped barrel bombs. Doctors explained how they treated victims (women, men, young and elderly people) who were terrified, coughing violently and struggling to breathe. They handed over blood, urine, saliva, and hair samples they had collected.


At the spots where the bombs had fallen, Alnahhas recorded 360-degree video of the surroundings, focusing on identifiable landmarks so the locations could be independently verified. He collected soil samples in small plastic containers, triple-sealing them in clear plastic bags and labeling them in front of the camera.

In Kafr Zita, he gathered pieces of shrapnel and measured heavy, rusted barrels bent, mangled, and peeled apart by the impact and detonation. There were three long canisters, two still lodged inside the barrels, covered in chipped yellow paint, the color often used to mark industrial chlorine gas. The chemical symbol Cl2 was still clearly visible on the ruptured nose of one.

In Talmenes, in the dimming evening light, Alnahhas filmed an impact crater in the backyard of a house. There were dead birds scattered across the ground, and the leaves on the plants and trees were dead, even though it was springtime. The smell of chlorine still hung in the air, causing him to cough and his eyes to water.

"To be honest," Alnahhas says, "this was the scariest time of my life."

Syria was one of the first major conflicts of the social-media era. Local access to Facebook had been restricted since 2007 as the government tried to limit online political activism. But by February 2011, when the Assad regime unblocked many social-media sites (either as a nod toward reform or as a way to track its opponents), they had become major forces across the globe, and many Syrians had cell phones with cameras and access to high-speed internet.

Soon afterwards, protests broke out in the south of the country and quickly spread. The government cracked down brutally, and activists, lawyers, medical workers, and ordinary citizens started using Facebook and YouTube, often at great personal risk, to record the violence and show it to the world.

Initial efforts were haphazard, and mostly involved people uploading shaky cell-phone video and using accounts with fake names to protect themselves. But before long the push to document what was happening became more organized and sophisticated. Media offices and local news agencies mushroomed. By early 2012, international organizations had begun training local activists on professional production standards and online security and helping them to record their videos. The idea wasn't just to release clips to the media, but to gather evidence that could be used to pursue justice in the future.

Volunteers took videos and photos at the scenes of attacks and potential war crimes, compiled detailed medical reports, recorded victim and witness statements, and smuggled reams of documents out of captured government buildings. Civil society groups such as the Syrian Archive and the Syria Justice and Accountability Centre collected millions of pieces of potential evidence, some of it made public, some filed away in protected archives.

The material collected by Syrians allowed people far away from the actual fighting to take part in the investigative efforts too. In 2012 Eliot Higgins, then an unemployed British blogger, began sifting through videos and photos posted from Syria, trying to identify the weapons being used; later he started a website, Bellingcat, and assembled a team of volunteer analysts.

Pioneering the technique of open-source investigation, Higgins and his team pieced together evidence suggesting that Syrian government forces were using chemical weapons and cluster bombs, that Russian forces had attacked hospitals in the country, and that ISIS was using small, commercially available drones to drop 40mm grenades onto targets.


Back then, many people working at the intersection of technology and human rights shared a belief in the power of social media and digital connectivity to do good, according to Jay D. Aronson, head of the Center for Human Rights Science at Carnegie Mellon University. "People thought that if we're able to document these war crimes and these human rights violations and we're able to share them with the world, then that will create political will that will lead countries to intervene and protect vulnerable populations," he says.

Spurred on by such optimism and the encouragement of Western politicians, such efforts made the Syrian conflict the most thoroughly documented in human history.

Thanks to frontline investigators like Houssam Alnahhas, local outfits like the Syrian Archive, and online analysts from Bellingcat, detailed information about what was happening on the ground was there. Someone just needed to act on it.

When Alnahhas returned to Turkey with the evidence he'd collected in Kafr Zita and Talmenes, he met up with a British chemical weapons expert who tested some of the samples. The analysis confirmed that they contained a high enough concentration of chlorine to kill people. The evidence clearly showed that the Syrian government, the only fighting force with helicopters at the time, had indiscriminately bombed civilians with chlorine gas, a war crime.

International media picked up on the story; human rights organizations published reports; the Organisation for the Prohibition of Chemical Weapons launched a fact-finding mission. The remaining samples were given to the Western governments who were interested, and then Alnahhas waited.

Nothing.

Last summer I met Ahmad al-Mohammad, a soft-spoken activist and communications director of the Syrian Institute for Justice, in Istanbul. He had been a 19-year-old agriculture student at Aleppo University when the uprising began in 2011.

The Syrian protesters were optimistic back then. The US had just led an international military intervention to protect civilians in Libya from the advancing army of former leader Muammar Qaddafi. "We listened to a lot of speeches from the president of America, Obama," said al-Mohammad. "We had hope, honestly, that the West would intervene and remove Bashar al-Assad."

And in 2012, Obama declared the use of chemical weapons in Syria a "red line." "The world is watching," he warned Assad. "If you make the tragic mistake of using [chemical] weapons, there will be consequences, and you will be held accountable."

Obama's resolve was put to the test on the morning of August 21, 2013. Syrian government forces launched rockets loaded with sarin gas, a deadly nerve agent, at the rebel-held enclave of Ghouta, on the outskirts of Damascus. It was by far the deadliest and most visible chemical attack of the war. Syrian activists quickly uploaded photos and videos of the casualties, many of them women and children, their faces blue from suffocation. The estimated death toll ranged from around 350 to more than 1,400.

Emily Haasch

The US, driven forward by the red line rhetoric, prepared to launch military strikes. The regime hunkered down. But at the last minute, Obama pulled back. Instead of using force, he opted for a deal brokered by Russia, which resulted in the Syrian government's signing on to the Chemical Weapons Convention and agreeing to declare its stockpiles and destroy them by mid-2014.

For people in opposition-held areas, the decision was crushing. "We lost hope that anyone would [stand] up and say enough killing civilians inside Syria," Mohammed Abdullah, a Syrian photographer who goes by Artino and who was in Eastern Ghouta at the time of the attack, told me.

And then, despite its promise to dismantle its chemical weapons program, the Syrian government launched chlorine gas strikes in April 2014, the ones Alnahhas documented. They were another clear violation of Obama's red line. When the outside world again failed to take strong action, Assad's government continued to push the envelope. According to a report by the Global Public Policy Institute (GPPi), a think tank in Berlin, this was when the Syrian government began integrating the use of chemical weapons, especially chlorine gas, into its arsenal of indiscriminate violence.

Assad's strategy was directed against civilians living in opposition-held residential areas far from the front lines. Life-sustaining social institutions (bakeries, hospitals, and markets) were often targeted with a brutality that forced people to choose between surrender, exile, and death. Tobias Schneider, one of the GPPi report's authors, refers to it as "the military utility of crimes against humanity." "The use of chemical weapons was the last couple of meters," he told me.

Heavier than air, poison gas sinks into basements and bunkers, suffocating and terrifying people sheltering from conventional bombs and weapons. Even if the chemical attacks often didn't kill large numbers of people, they showed that "there is absolutely nowhere you can hide and there's absolutely nothing [the regime] can do that will make the international community stop [the violence]," Schneider added.

The Syrian government has used chemical weapons more than 330 times so far, according to data collected by GPPi. The vast majority of these incidents, more than 300 of them, took place after the attacks in Ghouta, Kafr Zita, and Talmenes.

For Alnahhas, the lesson was clear.

"After providing evidence all the time, at a certain point you stop to believe that it will be effective," he said. "The main thing that I know is that neither I nor the people inside Syria trusted the international community anymore."

Many people who had been documenting the war were forced to leave Syria as it grew more violent. Some decided to focus on putting their lives back together, to finish their studies or start families. For many of those who remained in Syria, the work of documentation became too dangerous as the areas they were in fell under regime control.

But other activists have decided to take a longer-term view. Although the documentation efforts have failed to shift the course of the war, Syria has produced arguably the largest evidence base on war crimes ever recorded. Civil society organizations are sifting through the data, organizing it, and using it to build case files for prosecutions. Courts in Germany, France, and Sweden are already trying cases. Arrest warrants have been issued for several high-ranking members of the Assad regime, and charges have been brought against European companies for violating sanctions imposed on the Syrian government. The Open Society Justice Initiative (OSJI), a human rights litigation team, is working with the Syrian Archive to develop case files on a number of attacks, including the attack in Talmenes that Alnahhas documented.


"Open-source information has radically transformed how we investigate, collect, and analyze information," Steve Kostas, a lawyer with OSJI, told me by email. "We use it to establish a factual narrative of the attacks, to identify possible witnesses, [and] to identify and learn about suspected perpetrators." Still, says Beth Van Schaack, a visiting professor at Stanford Law School who previously worked on Syria at the US State Department, the prosecutions so far have been mostly against "lower-level individuals, opposition figures, [ISIS] members, and not the kinds of war crimes that have really come to characterize this war."

Holding the true architects of the Syrian government's war strategy responsible would require unity from other governments. But Russia has repeatedly blocked efforts to start an international process of justice and accountability; for example, it vetoed a 2014 UN Security Council resolution referring Syria to the International Criminal Court. The UN created a body called the International Impartial and Independent Mechanism to gather evidence for future cases, "but until this moment, we don't have any court or entity that has jurisdiction over crimes committed in Syria," says Deyaa Alrwishdi, a Syrian lawyer who has been involved in accountability efforts since 2011.

It now seems all but inevitable that the Assad regime, helped by Russia and Iran, will emerge victorious from the war. It may be decades, if ever, before it's truly held accountable.

"We get hope when we look at the former Yugoslavia and how victims and survivors from Bosnia and Herzegovina did eventually get justice. That gives us hope to keep holding on," al-Mohammad, from the Syrian Institute for Justice, told me in Istanbul.

He has scars on his face from having his jaw fractured in seven places when security forces threw him from the second story of a building during a protest in 2012. Two members of the documentation team he manages in Syria were killed while carrying out their work. And he has watched countless hours of video showing one brutal atrocity after another, giving him nightmares. His family is still in Syria, and he worries that they will be punished by the regime as retribution for his actions.

Emily Haasch; illustration source imagery courtesy of the author

It's hard for him to see a path forward or a way to return home. "Me and my friends, we sit down and we talk about it a lot... we don't really know where we are going," he says. "At the end of the day, people like us, our future in a Syria without justice is just death or prison."

Yet al-Mohammad and others have continued to record evidence of the crimes taking place. At some point, he says, it stopped being about what the international community would or wouldn't do; it became about Syrian people taking control of their own stories. "My goal became to document my country's history," he says.

When I met Alnahhas in Gaziantep earlier this summer, he told me he felt the same way. We talked at an outdoor café, surrounded by the mundane bustle of a busy town. Syria, just a few miles down the road, seemed far away. In the years since his dangerous trip to document the chemical weapons attacks, he had gone to a Turkish university to finish his medical degree, married, and started a family. He couldn't imagine returning home.

He told me about three of his friends, young students who had volunteered to provide care to injured protesters in the early days of the uprising. They were stopped at a regime checkpoint, and medical supplies were found in their car. Days later, their bodies were returned to their families, burned beyond recognition. Years later, his efforts to document the chemical attacks in Kafr Zita and Talmenes hadn't changed anything; people were still being murdered with impunity.

At the same time, he said, you cannot simply say "I'll not continue." If nothing else, documenting has given him and others like him a certain mission. "History is written by the strongest," he said, echoing the familiar adage. "Without proper evidence the regime will be able to, at a certain point, say 'No, this never happened'; [it] will be able to manipulate the history of the Syrian crisis, maybe to avoid punishment. So this is our responsibility."

Eric Reidy is a journalist based in the Middle East.


Lost albums of the 2010s – what became of the albums we were promised but that never arrived? – NME Live

As the decade heads to a close, we remember some of the albums that were promised, scrapped or simply haven't arrived over the last ten years

While everyone's brains are firmly fixed on the best albums of the 2010s (NME's definitive list is coming in November), it's worth taking a moment to remember those that we never got to hear.

Whether that's full albums that were scrapped in favour of the band or artist starting over, records leaked without the artist's consent and never to be heard in their full and finished form, albums that have been promised for years now but still haven't been released, or, in one case, a record literally being destroyed, there are some notable omissions from the best-of lists.

We've dug into the lost albums of the 2010s, why they were never released, and whether we can hope to hear them at some point in the future.

Kanye West – Yandhi

Planned release date: September 29, 2018
What happened? Two days before the planned release of Yandhi, Kanye played music from the album featuring XXXTentacion, 6ix9ine and more to The FADER at their headquarters. After a leaked tracklisting came out, the album was delayed until that November. The same month, West tweeted that Yandhi wasn't actually finished and that he'd announce the release date once it's done.
Will we ever hear it? With Kanye reportedly set to release new album Jesus Is King before the end of 2019, he's got other things on his mind. In September 2019, though, it was revealed that Ye is attempting to trademark the term Yandhi, leading to speculation that the album could indeed be on the way. The saga continues.

The Stone Roses – third album

Planned release date: 2016/2017
What happened? Outside the Crouch End Studios in London in 2016, Ian Brown confirmed to NME that The Stone Roses were recording new music, calling it "glorious" and saying it would arrive soon. It came three years after Mani said the band were working on a few bits of new music. Over the next few years, the band continued to play reunion shows with no new music surfacing, and in 2017 the band's biographer John Robb revealed that plans for the album had broken down, saying: "Between the four of them, there was a great third album in them. If they could have just made a record without caring about the pressure of expectation or commercial expectation. If they could have just jammed for 45 minutes, it would have been a great record."
Will we ever hear it? With John Squire confirming earlier this year that the band have indeed broken up, it seems very, very unlikely... but never say never.

My Bloody Valentine – new EPs/albums

Planned release date: 2018 or 2019
What happened? After waiting 22 years for m b v, the massively anticipated Loveless follow-up, My Bloody Valentine fans weren't expecting another new record for a while. However, Kevin Shields told Sound On Sound magazine in April 2018 that the band were working on two EPs, with one due for release that summer and a second to follow in spring 2019. He then said in another chat that the prospective first EP was being used instead as part of a new album, before scaling up his ambitions again, stating a plan to release two new albums in 2019.
Will we ever hear it? There's a fair chance that the music the band have been teasing (and also debuted live) will see the light of day, but specifics around what form it might take, or even a vague release date, are pretty much still up in the air.

Planned release date: Unknown (leaked online in April 2013)
What happened? After becoming the buzziest name around with 2012 single Jasmine, the elusive Londoner had an album's worth of demos and in-progress material stolen and illegally uploaded to Bandcamp. Paul stayed silent on the issue himself for six years before discussing the situation and officially uploading Leak 04-13 (Bait Ones) online, alongside the release of two new songs. "I've grown to appreciate that people have enjoyed that music and lived with it, and I accept that there is no way to put that shit back in the box," he wrote at the time. "Looking back, it's sad to think about what could have been, but it is what it is and I had to let go."
Will we ever hear it? Not in the way that Jai Paul intended us to, at least. The official release and accompanying message have drawn a line under the recordings, and the singer (presumably) continues to work on what will actually become his debut album.

Planned release date: 2018
What happened? In March 2018, Brockhampton announced that fourth studio album Team Effort, announced at the end of 2017 with the release of their Saturation III LP, was delayed indefinitely, and that they'd instead release another album called Puppy in mid-2018. That May, though, troubling allegations were made about member Ameer Vann, who was promptly kicked out of the group, leading to Puppy being delayed. After cancelled tour dates and a period of silence, the band made their comeback on the Jimmy Fallon show sans Vann, playing new song I, Tonya and announcing a new album called iridescence, effectively putting the non-era of Team Effort and Puppy to bed.
Will we ever hear it? Having released iridescence and fifth album Ginger since, there's pretty much zero chance that Team Effort and/or Puppy will ever surface.

Planned release date: 2015-present
What happened? Less than two years after her standout debut Night Time, My Time, Ferreira announced that her second album was called Masochism. The name has remained constant until now, though little else has. Though countless promises of new music have come and gone, along with messages discussing various logistical issues as reasoning for the album's delay, it was only in the summer of 2019 that she released Downhill Lullaby, the first single from Masochism. The album still has no release date, though, and it looks almost certain that we'll be waiting into the 2020s to hear it.
Will we ever hear it? Almost certainly, but after seven years and a great deal of confusion, the question of when we might hear it is as open-ended as ever.

Planned release date: September 2016
What happened? After being delayed from an original autumn 2016 date into the following year, a whole album of Charli demos, dubbed XCX World by fans and destined for her third studio album, was leaked in a similar turn of events to those that afflicted Jai Paul. It led her to start again on what would become LP3.
Will we ever hear it? With third album Charli released in September 2019, featuring none of the songs supposedly set for XCX World, the album seems destined to live in demo form forever.

Planned release date: 2017-2018
What happened? Speaking to Time about 2018 EP My Dear Melancholy, Abel Tesfaye revealed that prior to the EP he had finished an album that he described as beautiful and upbeat. "Prior to Melancholy, I had a whole album written, done, which wasn't melancholy at all because it was a different time in my life," he said. He went on to explain that the record was shelved because it didn't reflect his feelings at the time, saying: "I don't want to perform something that I don't feel."
Will we ever hear it? The verdict is pretty conclusive on this one. Asked whether fans could ever hear the record, The Weeknd simply said: "Never. Sorry!"

Planned release date: 2018
What happened? At the end of 2018, Machine Gun Kelly posted a message to fans, celebrating his music receiving half a billion streams. During the message, he said "I didn't give you the 4th album and I can't stop thinking about that", revealing that he had a whole new album ready to go before scrapping it and starting from scratch. Saying that he's now got a firm title in mind for the new record, Kelly concluded: "This one's from the soul."
Will we ever hear it? Maybe one day in the future, but definitely not for the time being.

Planned release date: 2011
What happened? Noel decided it was shit, basically. Ahead of his 2011 solo debut album, Gallagher had collaborated with production duo Amorphous Androgynous, and they'd written an album together. However, as Noel explained to NME in 2015: "I was in the middle of a tour, that last album had blown up, the mixes weren't right. And by the time I got back off tour I was just like, I'm not fucking putting out another record, I can't be arsed. I was frazzled and had glandular fever. I was fucked."
Will we ever hear it? The Right Stuff and The Mexican, from Noel's solo Chasing Yesterday LP, were originally songs from the AA collaboration, but hearing any more will be literally impossible: Noel owned the master recording of the album and destroyed it.


Greta Kline of Frankie Cosmos on slowing down (to a certain point) – Metro US

The music that songwriter Greta Kline creates inhabits the small moments of life in an abundant way. For years, she has been recording her own brand of bedroom pop under numerous monikers and uploading it online at the same pace that many of us exercise. These days, the creative process has slowed down only a little as she has settled into her most notable persona, Frankie Cosmos, releasing her second studio album for Sub Pop earlier this month, Close it Quietly.

The album, recorded with her longtime bandmates Lauren Martin (synth), Luke Pyenson (drums), and Alex Bailey (bass), finds Kline delivering one of her most focused and immediate selections of songs to date. And as with her past output, this is saying something, as the 21 songs included on Close it Quietly hover just around two minutes on average, with some clocking in at around thirty to forty seconds. When she reaches two and a half minutes on the album's closer, This Swirling, it feels like she is reaching prog territory in comparison. The record feels like the work of an artist who has spent years consistently putting in the work: a culmination of constant sharing and experimentation with song craft.

Greta Kline with her band, Frankie Cosmos. Photo: Jackie Lee Young

But with the band's ever-busy touring schedule, Kline's output has slowed down to only one or two releases per year as opposed to, say, five, with an emphasis on creating the right representation of her creative mindset at that point in time.

"Before we were a real band I was just putting out music every month," says Kline of the process of releasing music at this point in her life. "Every time I made a demo I was putting it out. Now it seems like so much less to me. In the past it seemed like I was putting out everything I thought of. Now it's like, I'll write ten songs and one of them will make it onto an album."

With someone as prolific as Kline is, the emphasis on chiseling time out to record amidst the recording and touring cycle has put things into perspective. "I think my time at home has a different meaning to me now," she explains. "Because we tour so much of most years. When I'm at home, I really want to be working on something. For us, this past winter was that. Just being able to record feels different than when you're touring. It just feels like precious time."

With so many ideas being brought to the table, she has found a real partnership with her band, whose contributions to the new album provide the right amounts of impact and pathos when required.

"I feel like we've, over the years, developed a really good style of communication with each other. We have more of a streamlined way of communicating. It's always hard because it's four people talking about what we should do with a song," Kline explains. "Something that I really appreciate about my band is they know when a song doesn't need to be added to. There are a couple of solo songs on the album where they were like, 'Yeah, I don't think we can add anything to this.' Then when they do have something to add, they're like, 'Yeah, maybe we could add this there.' That makes me trust them. They're not greedy players (laughs)."

In a way, being a fan of Kline's music brings a sort of reliable constant to your life, as every five or six months or so you are bound to hear a continuation of her story through a collection of short songs that will catch you up on how she is feeling at that given time. It's like a conversation resuming after being interrupted. I ask Kline if she views each song cycle in this way.

"I don't even think about the collections of songs as they're going to be an album," Kline says. "I think it does function in the way that you're saying where you get all of these snapshots and of course you have more of an understanding on a bigger thing because it's a bunch of short things. But I think it could be any bunch of short songs, it's just whatever I have. I think they're connected because they're from a similar time in my life. I write about the same stuff over and over, so that will also make them a little more similar or connected in some way. I don't necessarily think about the way the songs are working together to represent something, it's more that each one is a small moment and if you want to you can piece them together in some deeper understanding of life, or my life, or whatever."

Make sure to catch Frankie Cosmos on tour this Fall.


With a $70 Kit, This Startup Promises to Turn Anyone Into an Artist – Inc.

While on vacation four years ago, Elad Katav decided to try to teach himself a new skill: painting. The software company COO had little artistic experience and thought it would be a good chance to clear his mind and do something creative. He watched some tutorials on YouTube, picked up some supplies at a local crafts store, and sat down with a photo of his 5-year-old son to try to paint a portrait.

A few days later, the canvas--half completed--sat at the bottom of a trash bin. An experience that Katav had hoped would be therapeutic instead brought a lot of frustration.

Today, Katav is founder and CEO of Boston-based Cupixel, a startup that uses augmented reality to help people who aren't skilled artists sketch and paint. The company launched its first product, a $70 supply kit that's compatible with an app, on its website in January and quickly sold out of its inventory. In July, it launched on the Home Shopping Network's website, and Katav says the startup is on the verge of announcing partnerships with major brick-and-mortar retailers.

Katav previously served as COO of enterprise software company Correlsense. After his failed painting attempt, he believed there was a business opportunity around the concept of helping non-artists create art--and, having a background in software, he decided the product should involve some advanced technology. He founded Cupixel in 2016 and soon raised $2 million in seed funding from private backers. After two years of developing the AR tech, he launched the product at CES this January. Katav declined to reveal the startup's revenue, but said the company sold out of its first batch of 1,000 kits within two months and has since restocked with an additional 15,000 units.

For Katav, an Israeli immigrant with no artistic background, it's affirmation that there's a segment of the population who don't have the natural ability to make art but want to. "Art creation has so many benefits," he says. "It relaxes the body. It relaxes the mind. It gives you an opportunity to be creative. Yet it felt like this process was closed off to people like me."

Cupixel's kit includes everything you need--canvas, pencils, paint, brushes, a frame--to produce a hand-painted nine-by-nine-inch piece of artwork, aside from a smartphone or tablet. You start by choosing a work from Cupixel's online gallery or by uploading your own photo, which the software then converts into a sketchable image. On your device's screen, the image is divided into nine squares that correspond with the nine canvas tiles provided with the kit. You point your device's camera at the canvas, and on your screen, you see the image that you'll be tracing and painting. Using your pencil and brushes, you follow along with what's on the screen--an AR version of paint-by-numbers. When finished, you piece the nine squares together to form one larger one. Katav says the entire experience takes under two hours for most users.
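For readers curious about the general idea, here is a rough sketch in Python (using the Pillow imaging library) of how a photo might be turned into a traceable outline and cut into nine tiles. It is not Cupixel's actual pipeline--the company hasn't published its method--and the file names are made up; it simply illustrates the concept behind the nine-square workflow described above.

# Illustrative only: turn a photo into a line-art style outline and cut it
# into a 3x3 grid, mirroring the nine canvas squares described above.
# This is NOT Cupixel's actual pipeline.
from PIL import Image, ImageFilter, ImageOps

def make_traceable_tiles(photo_path: str, out_prefix: str = "tile") -> None:
    # Load the photo, convert to grayscale, and emphasise edges so the
    # result looks like something you could trace with a pencil.
    img = Image.open(photo_path).convert("L")
    outline = ImageOps.invert(img.filter(ImageFilter.FIND_EDGES))

    # Split the outline into nine equally sized tiles.
    w, h = outline.size
    tw, th = w // 3, h // 3
    for row in range(3):
        for col in range(3):
            box = (col * tw, row * th, (col + 1) * tw, (row + 1) * th)
            outline.crop(box).save(f"{out_prefix}_{row}{col}.png")

if __name__ == "__main__":
    make_traceable_tiles("family_photo.jpg")  # hypothetical input file

Each of the resulting tile_00.png through tile_22.png files would correspond to one of the nine canvas squares.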

Cupixel now has deals with more than 20 artists to include their work in its database. An artist receives a royalty each time his or her work is selected to be painted by a user. Katav says the startup is in the process of finalizing deals with two of the U.S.'s biggest arts and crafts retailers, though he declined to share which ones. It's worth noting that one of Cupixel's board members is Lew Klessel, a managing director at private equity firm New Mountain Capital and the former interim CEO of Michaels.

Cupixel's kit isn't the first AR product meant to help people create art. Lithuania-based SketchAR makes a $28 app that turns a phone into an AR device, overlaying a piece of paper or other canvas with a traceable sketch. Cupixel's product adds the painting aspect and includes the necessary supplies.

Katav's goal is to launch AR kits for other art forms like sculpting, woodcrafting, and paper crafting, though these three-dimensional processes clearly would be a bit more complex. Katav doesn't have a timeline yet, though he says the company has prototyped a paper-crafting AR product in its lab.

While the technology is exciting, Katav admits that some professional artists have pushed back against the idea of using technology to turn just anyone into an artist. The founder objects to this sentiment. Instead, he compares Cupixel to meal-in-a-box services that make cooking easier for those who lack the skills to do it all on their own.

"It doesn't make you a professional chef," he says. "But now you can participate in a beautiful process that you otherwise might not be able to."


Scarlett Moffatt shares troll post and calls for people to be kinder – digitalspy.com

Former Gogglebox and I'm a Celebrity... Get Me Out of Here! star Scarlett Moffatt has hit out at online trolls by naming and shaming one of them.

The reality TV personality shared private Instagram messages from someone branding her a "fat bitch" and a "fat c**t", before uploading a series of videos to her Stories calling for people to be nicer online.

Scarlett said: "I just want to say... apologies that last post has swearing in. But I just think it's really important for people to see some of the daily messages that I get.

"I'm really pleased that Jesy Nelson done the show on bringing trolls to the forefront and making people understand the effect it can have on you, especially if you don't have a good support network around you, I can imagine it's really, really difficult."


She continued: "People say because you're on the TV or if you're in the public eye then people are allowed an opinion.

"Yes, I would agree to that to an extent, but when people are using vile and bullying comments... at the end of the day, being a TV presenter is my job and if I was in any other job and I was getting emails like that from staff or from people, they would be getting warnings and sacked, something would be done about it.

"People need to remember it's okay having an opinion but when it's hurting people's feelings and when it's vile, abusive language, that's when it needs to stop!

"Educating children the majority of the time it's not children but if we start and drum this into kids young as they get older they'll understand that this is wrong. The majority of the comments I get are from around 25 to 50-year-old men, they seem to love calling me names, God knows why.

"People don't know what's happening in people's lives so you need to be a bit kinder."

Jesy Nelson: Odd One Out aired on BBC One, and is available to watch on BBC iPlayer now.

We would encourage anyone who identifies with the topics raised in this article to reach out. Organisations who can offer support include Samaritans on 116 123 (www.samaritans.org) or Mind on 0300 123 3393 (www.mind.org.uk).

Readers in the US are encouraged to visit mentalhealth.gov.



IGTV: What, Why and How You Should be Using it as a Marketing Tool – Business 2 Community

After a slow start for Instagram TV (IGTV), it's now becoming an integral part of social media strategies. After announcing in February that Instagram would allow one-minute previews of IGTV videos on the main Instagram feed, things started to change and views skyrocketed.

If you aren't in on this yet, you should be. Here's what, why and how you should be using IGTV as a marketing tool!

Back in June of 2018, Instagram released IGTV as "a new app for watching long-form, vertical video from your favorite Instagram creators." In May 2019 they started allowing for landscape videos as well. So, basically, a platform to compete with YouTube.

Each creator has their own channel, similar to TV. And just like a television, as soon as you access IGTV, videos start playing. Can you say hello, increased engagement?!

There's a standalone IGTV app, but it can also be accessed straight from your Instagram app. Unlike regular Instagram videos, IGTV videos can be 10 minutes to an hour long.

IGTV can have major benefits for your clients' businesses as well as your own. It's about creating value for your consumers. Here are some of the perks that come with IGTV:

Having another platform to share video content means another place to be seen. Which in turn means more engagement and more customers.

By 2020, the number of digital video viewers in the United States is projected to be more than 236 million. That's a lot of potential people viewing your content.

Unlike videos uploaded straight to Instagram, your IGTV videos can be 10-60 minutes long. This makes it a great spot for how-tos, behind the scenes, story features and more!

While Instagram Stories and other platforms are better suited for posting things as they happen, IGTV needs to be well thought out and planned.

Think about your target audience. What do they want to see? What do they want from you? How can you help them?

Since Instagram announced in February that users could post one-minute previews of their IGTV videos straight to the news feed, viewership has increased. When there's a new video, your followers can tap straight from their feed to watch the full video.

Posting previews on Instagram and sending followers to your IGTV helps boost engagement.

Using IGTV can also unlock a form of the swipe up feature for your Instagram Stories (more about this later).

Side note: videos posted on IGTV won't automatically upload to your Instagram. If you do want them to appear on your feed, you'll have to click "Post a Preview" under the title and description page when uploading your IGTV video.

You can access IGTV through the standalone app or through Instagram.

If you will be creating longer-form videos, we suggest actually downloading the app. The setup and getting started is pretty straightforward, with prompts. But here's a quick recap:


There are a lot of different things you can take advantage of by using IGTV. Here are some of our favorites:

In a previous blog post, we discussed how Instagram Story swipe-up features are only available to verified accounts or accounts with over 10,000 followers. But there are a couple of workarounds to add a link to your IG Story, and IGTV is one of them.

This should be a video directing people to click somewhere on the screen that will essentially be taking them to a certain link. It could be a video of you pointing up somewhere or just a static video with arrows pointing to where you want them to tap.

One thing to keep in mind when making this is that IGTV videos need to be at least a minute long.

Your title can be whatever you want, but we suggest using something like "Click here for the link" that reinforces your CTA. In the description, put the URL that you want to direct users to. That is the most important part!

Once you upload your IGTV video, make your Story on Instagram and you'll see the link icon in the options at the top. Don't get too excited, you can't directly link to the URL yet. But click the link and you'll get the option to link to your IGTV video. Select your CTA video and post.

Now when people watch your story there will be an option to swipe up to watch on IGTV and from there the link will be directly clickable.
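If you want to double-check a clip before uploading, a quick script can confirm it clears the one-minute minimum mentioned above. This is only a rough sketch in Python that assumes ffprobe (part of the free FFmpeg toolkit) is installed and on your PATH; the file name is a placeholder.

# Pre-upload sanity check: confirm a clip is at least 60 seconds long
# before building an IGTV "swipe up" CTA video around it.
import subprocess

def duration_seconds(path: str) -> float:
    # Ask ffprobe for the container duration in plain seconds.
    out = subprocess.run(
        ["ffprobe", "-v", "error",
         "-show_entries", "format=duration",
         "-of", "default=noprint_wrappers=1:nokey=1", path],
        capture_output=True, text=True, check=True,
    )
    return float(out.stdout.strip())

if __name__ == "__main__":
    clip = "cta_swipe_up.mp4"  # hypothetical file
    secs = duration_seconds(clip)
    print(f"{clip}: {secs:.1f}s", "OK" if secs >= 60 else "too short for IGTV")

Anything under 60 seconds gets flagged before you spend time on titles, descriptions and previews.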

We talked about Sephora in a previous article, but honestly they are just the definition of using IGTV effectively as a marketing tool, so they are being highlighted again.

Makeup and hair tutorials, FAQs and how-tos deeply resonate with their 18.4 million followers. They take time to plan out each video and understand what their audience wants. While they are creating something helpful and entertaining, they are also driving sales. WIN-WIN!

You don't have to sell products or services to make use of IGTV, and Lele Pons is a great example of this.

Pons is known for her comedic internet videos and has created somewhat of a TV show on her IGTV. She has episodes called "What's Cooking" where she makes different food and brings on different guests. IGTV gave her the platform to open up a different part of her life.

So even if you aren't selling anything, IGTV can still add value to your audience's experience.

Here's all the details you need to know about sizing, timing and framing:

While getting started with IGTV might seem like a daunting task, the need for video in your social media strategy is too important to sit this out. So jump on the IGTV train and get to creating value for your customers!


Steam’s wonderful Library Update beta is finally live: Here’s how to get it – PCWorld

As promised, Valve pushed the new Steam Library Update into open beta this morning. Quick access to your recently played games! More detailed Details pages! Better library search and filtering tools! Drag-and-drop! No bezels on the left and right edges! All the modern conveniences and quality-of-life upgrades that (if we're honest) probably should've been in Steam already. But damn, they're nice to have now.

You can read our detailed breakdown of the Library Update and all its features, or you can simply install it for yourself. If you're keen on a refreshed Steam and don't mind the potential for a few bugs along the way, all it takes is an opt-in to get into the beta.

It's pretty easy. You can go click the big Join The Beta button if you want to feel official, but really all you need to do is open Steam, go to the Settings menu (under Steam), and look for a section on the Account page that says Beta Participation. Click Change, and then on the drop-down menu choose Steam Beta Update.

Restart Steam, and you're in. You'll know it worked immediately, because the familiar Store screen will now stretch all the way to the left and right edges of the window, no bezel. Most of the key features are over on the Library tab though. That's where you'll be greeted by the new Home page, the redesigned sidebar, and so forth.

Change is good, sometimes. Having lived through hundreds of interface changes across countless programs, I feared the worst. Valve's PAX demo assuaged those fears somewhat, but you never really know what will annoy you until you've tested it yourself.

So far I'm very impressed though. The new interface is clean and reactive, and I'm finding the new organizational tools fun to mess around with. I'm not going to spend a ton of time recapping because, as I said, you can read about everything at length in our longer (albeit hands-off) impressions.

But there are a few smaller features I hadn't noticed in Valve's demo. I like, for instance, that you can quickly toggle Collections (which used to be Categories) on and off, flipping between your organized library and a simpler alphabetical list of games. There's also a Ready to Play button in the top-left that will quickly omit any uninstalled games from the list. And even with more than 2,000 games in my library, these sorting changes are snappy.

I also like the Sort by Recent Activity button, which gives you a month-by-month breakdown of the games you've played this year, and then a yearly breakdown after that. It goes back ages, too. Curious what you were playing in 2014? Steam can now show you.

That said, there are a few weird issues. You can't (or at least I can't find a way to) sort by size anymore, which is a problem in an era where game sizes are rapidly ballooning. I used to change to Steam's old list view and sort by install size every year or so to do some housecleaning, uninstall that 100GB game I was never going to finish. The loss of that functionality is pretty painful.

[UPDATE: I found the "Size on Disk" sorting feature. It's hidden on the Home page, if you scroll down to the list of all your games, there's a drop-down "Sort By" menu. "Size on Disk" is under that. However, it's still a bit less useful than the old method as there's no way to separate games out by the drive they're installed on. For those of you with Steam libraries that span multiple drives, you'll now need to right-click your largest games, go to "Properties," and see where each is installed individually. Bit of a pain, though at least some of the sorting functionality is intact.]
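If you want that per-drive breakdown back right now, one rough do-it-yourself workaround (outside Steam entirely) is a few lines of Python that walk each library's steamapps\common folder and rank the game folders by size. The paths below are only examples, not defaults you can count on; swap in wherever your own libraries actually live.

# Rank installed game folders by size on disk, grouped per Steam library,
# so games on different drives stay separate. Library paths are examples.
import os

LIBRARIES = [
    r"C:\Program Files (x86)\Steam\steamapps\common",  # example path
    r"D:\SteamLibrary\steamapps\common",               # example path
]

def folder_size(path: str) -> int:
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass  # skip files that vanish or can't be read
    return total

for library in LIBRARIES:
    if not os.path.isdir(library):
        continue
    games = [(folder_size(os.path.join(library, d)), d)
             for d in os.listdir(library)
             if os.path.isdir(os.path.join(library, d))]
    print(f"\n{library}")
    for size, name in sorted(games, reverse=True):
        print(f"  {size / 1024**3:6.1f} GB  {name}")

The output is grouped per library folder, which is exactly the per-drive view the new Home page sort doesn't give you.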

And it's a beta. I've definitely seen some behind-the-scenes code today as I've clicked around, with trading card messages especially susceptible to breaking. Valve's also transitioning to new box art for every game in your library, but old games? Ones that will probably never be updated? They get that Vaseline-smear above and below, the same frosted-window look people use when uploading vertical video to a horizontal-aspect-ratio site like YouTube. It looks kind of ugly.

Still, I can't see any reason not to update. Steam's been stagnant for ages now. It's refreshing to see large-scale library changes, especially since that's one of the areas where Valve has a clear lead over the competing Epic Games Store. As someone who's amassed thousands of games on Steam, it's a relief to finally have some control over my backlog, or at least the illusion of control.


Augmented Reality: Eight AR Marketing Applications For Brands In 2019 – Forbes

With a growing number of companies like Facebook developing augmented reality (AR) glasses, there's a good chance AR marketing will be one of 2019's hottest trends. When a global marketing leader like Facebook goes all-in on augmented reality, you know brand builders are likely to follow their lead.

Personally and through my work in this space, I've seen some amazing use cases for AR in terms of training applications that have been developed on the Microsoft Hololens to help aid frontline workers with design and manufacturing.

But how do you use augmented reality to market your business, when you're likely unfamiliar with this new technology? What do you need to know about AR marketing to make it work for your company?

If you're a brand builder who wants to put the power of augmented reality glasses and AR technology to work for your business, bear the following crucial tips in mind.

1. AR-Enabled Video

Augmented reality marketing can significantly increase your company's dwell time at live events and on your website. Consider uploading AR-enabled videos to your website and watching in real time as potential customers interact with your creations. Forget video marketing -- as AR-enabled videos take hold, standard videos are likely to look passé compared to augmented reality videos.

When developing AR videos for web content, look for platforms that offer a fully integrated web API to allow you to take any 3D model and put it on the web, such as Vuforia or Turbosquid.

2. Wearable Technology

Wearable technology is likely to be a hot tool for those interested in using augmented reality marketing to grow their brands. Combine tools like smartwatches and voice-enabled wearables to take your AR marketing to the next level. Imagine a marketing thought leader like Gary Vaynerchuk giving a keynote speech while wearing a voice-activated wearable and augmented reality glasses to broadcast his speech in real-time. Brands may be able to tap into this content through web APIs, as mentioned above.

3. Experiential Marketing

Experiential marketing will flourish thanks to augmented reality glasses and AR technology. If you want to develop a solid relationship with your target customers, understand that offering a superior brand experience is the key to creating long-term relationships with your audience.

For example, my company recently worked with a whiskey brand to develop an AR experience. We prompted users to point their cameras on the whiskey bottle's label, which in turn showcased a history of the whiskey brand and the various types of quality available for consumers.

4. Mixed-Media Marketing

Big-name brands such as Starbucks, Volvo and Walmart have already begun experimenting with augmented reality marketing. Industry thought leaders are realizing mixed reality marketing, virtual reality marketing and AR-enabled customer outreach are where the advertising and marketing industry is heading. Thanks to tools like Apple's ARKit, even small business owners can follow the lead of industry thought leaders and put augmented reality marketing to work for their brands.

Brands looking to experiment with immersive technology can download ARKit and convert their marketing assets to 3D models, then import them into Xcode in order to showcase 3D assets in real time.

5. AR Interfaces

Augmented reality can be used in a variety of ways to increase brand awareness and drive sales. For example, include an AR interface next to your point of sale terminal, or AR interfaces that allow customers to try products prior to purchasing. Brands are using AR technology for everything from AR-enabled online content to AR-enabled interfaces at their trade shows and conferences.

Simply having a tablet in retail locations loaded up with 3D assets of your brand's products can allow you to showcase digital assets even when they aren't physically available.

6. AR Thought Leadership

The AR sector is expected to explode as more companies realize the possible uses for this technology. Sectors like manufacturing are already using augmented reality technology to design, build and test products. The sooner brand marketers realize the numerous ways they can use augmented reality technology to connect with their audiences, the sooner they can build reputations as marketing thought leaders.

Some of the best ways brands can get the word out about their immersive technology initiatives are to showcase them live in person at trade shows. This gives your prospective customers a first-hand perspective and engages them in a unique and meaningful way. This works especially well for products that are large and cumbersome and would be too expensive to travel with in order to showcase.

7. AR-Enabled Ads

Augmented reality technology can be used for everything from product testing to advertising. Expect to see a growing number of advertising networks and social media platforms begin to offer AR-enabled ads. When you consider potential customers being able to interact with your AR-enabled ads on social networks like Facebook, you begin to see the potential of augmented reality as a customer acquisition tool.

Including a call to action and a touchpoint or hyperlink within your AR application will give customers a direct link to sales channels for those engagements.

8. A Growing Trend Across Industries

A growing number of industries are expected to capitalize on augmented reality marketing. From the fashion and beauty sector to automotive and travel industries, augmented reality marketing is likely to be a hot trend across numerous verticals in the coming years. Understanding how marketing and advertising are changing will allow savvy brand builders to maximize their customer outreach efforts; this is especially true in the technology sector where brands are constantly looking to innovate.

If you are a business builder intrigued by the move toward augmented reality, it is imperative you start developing an augmented reality marketing strategy now. Creating a detailed plan of action to capitalize on AR technology can help you develop a sizable lead over your competitors while building your reputation as an industry thought leader at the same time.

Will your company be integrating augmented reality into your overall marketing strategy in 2019?


Love Island's Arabella Chi says 'I love you' to new boyfriend Wes Nelson after just two months of dating – The Sun

LOVE Island's Arabella Chi has taken her relationship with boyfriend Wes Nelson to the next level by admitting she loves him online.

The model, 28, was caught declaring her love for the 21-year-old on Instagram despite only going official with the former Islander two months ago.


Commenting on a snap of the pair posted to Wes' account, she wrote: "I love you".

And while some fans thought the gushing was "cute", others weren't as convinced.

One commented: "A bit soon for that surely".

Another wrote: "So soon?"


The Sun exclusively revealed that Arabella and Wes were growing close back in June, with an insider dishing that there was a "natural connection" between them.

They were then spotted stepping out to the shops after spending the night together, and then enjoyed a romantic break to Ibiza where they packed on the PDA.

But it wasn't long before the pair made things Instagram official by uploading a loved-up snap online.

Wes previously dated Megan Barton-Hanson after meeting on last year's Love Island, however they split up earlier this year.


She has since enjoyed a brief romance with her Celebs Go Dating co-star, Demi Sims, and is currently dating singer Chelsee Grimes.

Meanwhile, Arabella was linked to co-star Danny Williams during her time in the villa, as well as former show star Charlie Frederick.

Charlie, who starred in the 2018 series, took to Instagram shortly after Arabella's Love Island debut to reveal that their romance was "getting back on track".


He said: "Let me set the record straight, me andArabellawere never boyfriend and girlfriend, we were seeing each other, we were getting together.

"We were working out what was going on, what we both wanted etc, we were never girlfriend and boyfriend.

"But I don't know what's worse really, telling someone you want them in your life and then disappearing into the villa... mind blown."



Looking To Sell Smartphone? Here’s What You Need to Know – Updato

Selling a smartphone sounds easy until you actually reach a point when it must be done. It's only then when we truly realize how much needs to be done.

If this is the first time that you're doing this, then the whole process can definitely feel overwhelming. That's why we decided to create a small guide. Not only for beginners, but also for intermediates who wanna learn a thing or two.

With all that being said, let's get right into it!

Selling a phone requires you to list as much information as you can about it. What specs does it have? How large is its screen? What resolution? How much storage? How about its overall condition?

Sure, the buyer can easily find most of that information with a Google search. But, so can you. And if that puts you one step ahead compared to other people who are trying to sell the same phone at the same price, then why not take it?

This is a somewhat good example where the seller mentions not only the specs but also the overall condition of the device. If you're trying to sell smartphone on eBay or another similar website, then mentioning that kind of information will make it more likely for you to sell your device.

If you don't remember or know your phone's specifications, then you should be able to easily find them online with a quick Google search. Also, don't forget to mention if your phone is unlocked or not.

As for describing the overall condition of the device, well, that's mostly your decision to make.

Smartphones have become a huge part of our lives. Each and every device holds a ton of personal information in it.

Photos, videos, documents, contacts, and so much more. We obviously can't trust a stranger with all that information.

So, once you know the specifications of your phone, you must start preparing it so that it can be delivered in a brand-new like condition. That process involves:

There are tons of ways to backup important data. Be it contacts, pictures, or whatever.

Two of the most popular options are keeping everything on your computer's drive (locally) or uploading it to something like Google Drive (cloud). The choice is yours to make, really.

Just do keep in mind that cloud backups can get extremely slow; depending on your internet connection. However, they are also more reliable than local backups. So, weigh your options.

To make a local backup:

This is what you should see on the computer. After that, all you need to do is copy whatever you want by using the PC and keep it in a safe place. The best option is probably a separate folder for backups.
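If you prefer something repeatable, here's a minimal Python sketch of the same idea: copy the phone's DCIM and Download folders into a dated backup folder on your computer. The mount path is only an example (it depends on your operating system and how the phone shows up when plugged in), so adjust it to match your own setup.

# Rough sketch of a local backup: copy the phone's photo and download
# folders (as seen when the phone is mounted over USB) into a dated
# backup folder. The mount point below is an example, not a default.
import shutil
from datetime import date
from pathlib import Path

PHONE_MOUNT = Path("/run/user/1000/gvfs/phone")  # example mount point
BACKUP_ROOT = Path.home() / "phone-backups" / str(date.today())

for folder in ("DCIM", "Download"):
    src = PHONE_MOUNT / folder
    if src.is_dir():
        shutil.copytree(src, BACKUP_ROOT / folder, dirs_exist_ok=True)
        print(f"Copied {src} -> {BACKUP_ROOT / folder}")

Change the folder names if you also want documents, voice recordings and so on.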

As for using Google Drive:

This is how our drive looks. But, you can manage your stuff in any way you see fit.

Do keep in mind that the free plan of Google Drive only delivers 15 gigabytes of storage. If you want more, then the only option is looking into paid subscriptions or going back to local backups.

Now that all the data is safe and ready to be moved to your new phone, it's time to erase everything from the device that's about to be sold. This will allow you to sell smartphone in a brand-new like condition.

There are primarily two ways of doing that:

Though, do keep in mind that you must log out from your Google account first. If you don't, the phone will be locked and the buyer will be forced to ask for your Google password.

If that happens, you'll be forced to reset your password after that and nobody wants that. So, to remove your Google account, go to:

Now that this is out of the way, we can proceed with the factory reset. To do it through the phone's settings, go to:

That being said, if you want the buyer to feel as if he is powering up the phone for the first time ever, then your best bet is using the recovery mode:

This is for TWRP recovery. But, it's more or less the same thing for stock recoveries as well.

Now you're almost ready to sell smartphone.

Sure. We're all expecting to find some signs of use on a used phone. But, if you keep it as clean as possible, then that increases your odds of being able to sell it. After all, you'll likely have to post some pictures anyway.

There's not much to teach you here. Using something like a simple microfiber cloth should be more than enough to get rid of fingerprints and stuff like that.

You can drop a tiny bit of water on it to clean while using the dry part to remove wet marks.

If you want to sell smartphone, sure, you could just drop it into a cardboard box, ship it, and be done with it.

But, the reality of the situation is that if you do that, then chances are that the device is going to get damaged on the way and your buyer isn't going to be happy with that.

In cases like that, they'll either ask for a refund, they'll leave negative feedback, or both. No matter how you look at it, skimping on safety isn't worth it.

So, make sure to add a bit of antistatic bubble wrap around the phone to protect it. Not sure if it needs to be antistatic, but, hey, better safe than sorry, right?

It'll be for the best if you can also find its original box and accessories.

Now that you're ready to sell smartphone, it's time to select a good platform. And there are plenty of them to choose from.

Some of the most popular ones include:

If you're living in the US, then do definitely consider Swappa as they are focused on that field of work.

Otherwise, eBay is probably your next best option since it's extremely popular. Other than that, do definitely consider checking out some local platforms that are focused specifically in your region. That makes it easier for a potential buyer to spot your offer.

Now that everything is said and done, all you have to do is post your offer, wait for someone to see it, then head over to your local post office or courier and send it over to the buyer. Simple as that.

That's all for now. Hopefully, that helped you out. If there are any questions or something that we can help with, then let us know about it in the comments.

Feel like we forgot to mention something important? Got anything wrong? Then let us and everyone else know about it in the comments section down below!



The virtual afterlife will transform humanity | Aeon Essays

In the late 1700s, machinists started making music boxes: intricate little mechanisms that could play harmonies and melodies by themselves. Some incorporated bells, drums, organs, even violins, all coordinated by a rotating cylinder. The more ambitious examples were Lilliputian orchestras, such as the Panharmonicon, invented in Vienna in 1805, or the mass-produced Orchestrion that came along in Dresden in 1851.

But the technology had limitations. To make a convincing violin sound, one had to create a little simulacrum of a violin, quite an engineering feat. How to replicate a trombone? Or an oboe? The same way, of course. The artisans assumed that an entire instrument had to be copied in order to capture its distinctive tone. The metal, the wood, the reed, the shape, the exact resonance, all of it had to be mimicked. How else were you going to create an orchestral sound? The task was discouragingly difficult.

Then, in 1877, the American inventor Thomas Edison introduced the first phonograph, and the history of recorded music changed. It turns out that, in order to preserve and recreate the sound of an instrument, you don't need to know everything about it, its materials or its physical structure. You don't need a miniature orchestra in a cabinet. All you need is to focus on the one essential part of it. Record the sound waves, turn them into data, and give them immortality.

Imagine a future in which your mind never dies. When your body begins to fail, a machine scans your brain in enough detail to capture its unique wiring. A computer system uses that data to simulate your brain. It won't need to replicate every last detail. Like the phonograph, it will strip away the irrelevant physical structures, leaving only the essence of the patterns. And then there is a second you, with your memories, your emotions, your way of thinking and making decisions, translated onto computer hardware as easily as we copy a text file these days.

That second version of you could live in a simulated world and hardly know the difference. You could walk around a simulated city street, feel a cool breeze, eat at a café, talk to other simulated people, play games, watch movies, enjoy yourself. Pain and disease would be programmed out of existence. If you're still interested in the world outside your simulated playground, you could Skype yourself into board meetings or family Christmas dinners.

This vision of a virtual-reality afterlife, sometimes called 'uploading', entered the popular imagination via the short story 'The Tunnel Under the World' (1955) by the American science-fiction writer Frederik Pohl, though it also got a big boost from the movie Tron (1982). Then The Matrix (1999) introduced the mainstream public to the idea of a simulated reality, albeit one into which real brains were jacked. More recently, these ideas have caught on outside fiction. The Russian multimillionaire Dmitry Itskov made the news by proposing to transfer his mind into a robot, thereby achieving immortality. Only a few months ago, the British physicist Stephen Hawking speculated that a computer-simulated afterlife might become technologically feasible.

It is tempting to ignore these ideas as just another science-fiction trope, a nerd fantasy. But something about it won't leave me alone. I am a neuroscientist. I study the brain. For nearly 30 years, I've studied how sensory information gets taken in and processed, how movements are controlled and, lately, how networks of neurons might compute the spooky property of awareness. I find myself asking, given what we know about the brain, whether we really could upload someone's mind to a computer. And my best guess is: yes, almost certainly. That raises a host of further questions, not least: what will this technology do to us psychologically and culturally? Here, the answer seems just as emphatic, if necessarily murky in the details.

It will utterly transform humanity, probably in ways that are more disturbing than helpful. It will change us far more than the internet did, though perhaps in a similar direction. Even if the chances of all this coming to pass were slim, the implications are so dramatic that it would be wise to think them through seriously. But I'm not sure the chances are slim. In fact, the more I think about this possible future, the more it seems inevitable.

If you did want to capture the music of the mind, where should you start? A lot of biological machinery goes into a human brain. A hundred billion neurons are connected in complicated patterns, each neurone constantly taking in and sending signals. The signals are the result of ions leaking in and out of cell membranes, their flow regulated by tiny protein pores and pumps. Each connection between neurons, each synapse, is itself a bewildering mechanism of proteins that are constantly in flux.

It is a daunting task just to make a plausible simulation of a single neurone, though this has already been done to an approximation. Simulating a whole network of interacting neurons, each one with truly realistic electrical and chemical properties, is beyond current technology. Then there are the complicating factors. Blood vessels react in subtle ways, allowing oxygen to be distributed more to this or that part of the brain as needed. There are also the glia, tiny cells that vastly outnumber neurons. Glia help neurons function in ways that are largely not understood: take them away and none of the synapses or signals work properly. Nobody, as far as I know, has tried a computer simulation of neurons, glia, and blood flow. But perhaps they wouldn't have to. Remember Edison's breakthrough with the phonograph: to faithfully replicate a sound, it turns out you don't also have to replicate the instrument that originally produced it.

So what is the right level of detail to copy if you want to capture a person's mind? Of all the biological complexity, what patterns in the brain must be reproduced to capture the information, the computation, and the consciousness? One of the most common suggestions is that the pattern of connectivity among neurons contains the essence of the machine. If you could measure how each neurone connects to its neighbours, you'd have all the data you need to re-create that mind. An entire field of study has grown up around neural network models, computer simulations of drastically simplified neurons and synapses. These models leave out the details of glia, blood flow, membranes, proteins, ions and so on. They only consider how each neurone is connected to the others. They are wiring diagrams.

Simple computer models of neurons, hooked together by simple synapses, are capable of enormous complexity. Such network models have been around for decades, and they differ in interesting ways from standard computer programs. For one thing, they are able to learn, as neurons subtly adjust their connections to each other. They can solve problems that are difficult for traditional programs, and are particularly good at taking noisy input and compensating for the noise. Give a neural net a fuzzy, spotty photograph, and it might still be able to categorise the object depicted, filling in the gaps and blips in the image, something called 'pattern completion'.
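To make that concrete, here is a toy sketch in Python of a Hopfield-style network: a single binary pattern is stored purely in a matrix of connection weights, and the network recovers it from a corrupted copy. It is nothing like a brain, and it is only an illustrative example rather than anything drawn from the research described here, but it shows how a bare wiring diagram can fill in gaps in noisy input.

# Toy "pattern completion": store one 100-unit pattern of +1/-1 values in a
# symmetric weight matrix, corrupt 20% of the units, and let the network
# settle back to the stored pattern. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

pattern = rng.choice([-1, 1], size=100)
weights = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(weights, 0)  # no self-connections

noisy = pattern.copy()
flip = rng.choice(100, size=20, replace=False)
noisy[flip] *= -1  # flip 20 of the 100 units

state = noisy.copy()
for _ in range(10):  # a few synchronous update sweeps
    state = np.where(weights @ state >= 0, 1, -1)

print("bits wrong before:", int((noisy != pattern).sum()))
print("bits wrong after: ", int((state != pattern).sum()))

In this toy case the flipped bits come back after a single sweep; a real connectome would involve billions of units and far richer dynamics, but the principle being gestured at is the same.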

Despite these remarkably human-like capacities, neural network models are not yet the answer to simulating a brain. Nobody knows how to build one at an appropriate scale. Some notable attempts are being made, such as the Blue Brain project and its successor, the EU-funded Human Brain Project, both run by the Swiss Federal Institute of Technology in Lausanne. But even if computers were powerful enough to simulate 100 billion neurons (and computer technology is pretty close to that capability), the real problem is that nobody knows how to wire up such a large artificial network.

In some ways, the scientific problem of understanding the human brain is similar to the problem of human genetics. If you want to understand the human genome properly, an engineer might start with the basic building blocks of DNA and construct an animal, one base pair at a time, until she has created something human-like. But given the massive complexity of the human genome (more than 3 billion base pairs), that approach would be prohibitively difficult at the present time. Another approach would be to read the genome that we already have in real people. It is a lot easier to copy something complicated than to re-engineer it from scratch. The human genome project of the 1990s accomplished that, and even though nobody really understands it very well, at least we have a lot of copies of it on file to study.

The same strategy might be useful on the human brain. Instead of trying to wire up an artificial brain from first principles, or training a neural network over some absurdly long period until it becomes human-like, why not copy the wiring already present in a real brain? In 2005, two scientists, Olaf Sporns, professor of brain sciences at Indiana University, and Patric Hagmann, neuroscientist at the University of Lausanne, independently coined the term connectome to refer to a map or wiring diagram of every neuronal connection in a brain. By analogy to the human genome, which contains all the information necessary to grow a human being, the human connectome in theory contains all the information necessary to wire up a functioning human brain. If the basic premise of neural network modelling is correct, then the essence of a human mind is contained in its pattern of connectivity. Your connectome, simulated in a computer, would recreate your conscious mind.

Could we ever map a complete human connectome? Well, scientists have done it for a roundworm. They've done it for small parts of a mouse brain. A very rough, large-scale map of connectivity in the human brain is already available, though nothing like a true map of every idiosyncratic neurone and synapse in a particular person's head. The National Institutes of Health in the US is currently funding the Human Connectome Project, an effort to map a human brain in as much detail as possible. I admit to a certain optimism toward the project. The technology for brain scanning improves all the time. Right now, magnetic resonance imaging (MRI) is at the forefront. High-resolution scans of volunteers are revealing the connectivity of the human brain in more detail than anyone ever thought possible. Other, even better technologies will be invented. It seems a no-brainer (excuse the pun) that we will be able to scan, map, and store the data on every neuronal connection within a person's head. It is only a matter of time, and a timescale of five to 10 decades seems about right.

Of course, nobody knows if the connectome really does contain all the essential information about the mind. Some of it might be encoded in other ways. Hormones can diffuse through the brain. Signals can combine and interact through other means besides synaptic connections. Maybe certain other aspects of the brain need to be scanned and copied to make a high-quality simulation. Just as the music recording industry took a century of tinkering to achieve the impressive standards of the present day, the mind-recording industry will presumably require a long process of refinement.

That won't be soon enough for some of us. One of the basic facts about people is that they don't like to die. They don't like their loved ones or their pets to die. Some of them already pay enormous sums to freeze themselves, or even (somewhat gruesomely) to have their corpses decapitated and their heads frozen on the off-chance that a future technology will successfully revive them. These kinds of people will certainly pay for a spot in a virtual afterlife. And as the technology advances and the public starts to see the possibilities, the incentives will increase.

One might say (at risk of being crass) that the afterlife is a natural outgrowth of the entertainment industry. Think of the fun to be had as a simulated you in a simulated environment. You could go on a safari through Middle Earth. You could live in Hogwarts, where wands and incantations actually do produce magical results. You could live in a photogenic, outdoor, rolling country, a simulation of the African plains, with or without the tsetse flies as you wish. You could live on a simulation of Mars. You could move easily from one entertainment to the next. You could keep in touch with your living friends through all the usual social media.

I have heard people say that the technology will never catch on. People won't be tempted because a duplicate of you, no matter how realistic, is still not you. But I doubt that such existential concerns will have much of an impact once the technology arrives. You already wake up every day as a marvellous copy of a previous you, and nobody has paralysing metaphysical concerns about that. If you die and are replaced by a really good computer simulation, it'll just seem to you like you entered a scanner and came out somewhere else. From the point of view of continuity, you'll be missing some memories. If you had your annual brain-backup, say, eight months earlier, you'll wake up missing those eight months. But you will still feel like you, and your friends and family can fill you in on what you missed. Some groups might opt out (the Amish of information technology) but the mainstream will presumably flock to the new thing.

And then what? Well, such a technology would change the definition of what it means to be an individual and what it means to be alive. For starters, it seems inevitable that we will tend to treat human life and death much more casually. People will be more willing to put themselves and others in danger. Perhaps they will view the sanctity of life in the same contemptuous way that the modern e-reader crowd views old fogeys who talk about the sanctity of a cloth-bound, hardcover book. Then again, how will we view the sanctity of digital life? Will simulated people, living in an artificial world, have the same human rights as the rest of us? Would it be a crime to pull the plug on a simulated person? Is it ethical to experiment on simulated consciousness? Can a scientist take a try at reproducing Jim, make a bad copy, casually delete the hapless first iteration, and then try again until he gets a satisfactory version? This is just the tip of a nasty philosophical iceberg we seem to be sailing towards.

In many religions, a happy afterlife is a reward. In an artificial one, due to inevitable constraints on information processing, spots are likely to be competitive. Who decides who gets in? Do the rich get served first? Is it merit-based? Can the promise of resurrection be dangled as a bribe to control and coerce people? Will it be withheld as a punishment? Will a special torture version of the afterlife be constructed for severe punishment? Imagine how controlling a religion would become if it could preach about an actual, objectively provable heaven and hell.

Then there are the issues that will arise if people deliberately run multiple copies of themselves at the same time, one in the real world and others in simulations. The nature of individuality, and individual responsibility, becomes rather fuzzy when you can literally meet yourself coming the other way. What, for instance, is the social expectation for married couples in a simulated afterlife? Do you stay together? Do some versions of you stay together and other versions separate?

Then again, divorce might seem a little melodramatic if irreconcilable differences become a thing of the past. If your brain has been replaced by a few billion lines of code, perhaps eventually we will understand how to edit any destructive emotions right out of it. Or perhaps we should imagine an emotional system that is standard-issue, tuned and mainstreamed, such that the rest of your simulated mind can be grafted onto it. You lose the battle-scarred, broken emotional wiring you had as a biological agent and get a box-fresh set instead. This is not entirely far-fetched; indeed, it might make sense on economic rather than therapeutic grounds. The brain is roughly divisible into a cortex and a brainstem. Attaching a standard-issue brainstem to a person's individualised, simulated cortex might turn out to be the most cost-effective way to get them up and running.

So much for the self. What about the world? Will the simulated environment necessarily mimic physical reality? That seems the obvious way to start out, after all. Create a city. Create a blue sky, a pavement, the smell of food. Sooner or later, though, people will realise that a simulation can offer experiences that would be impossible in the real world. The electronic age changed music, not merely mimicking physical instruments but offering new potentials in sound. In the same way, a digital world could go to some unexpected places.

To give just one disorientating example, it might include any number of dimensions in space and time. The real world looks to us to have three spatial dimensions and one temporal one, but, as mathematicians and physicists know, more are possible. It's already possible to programme a video game in which players move through a maze of four spatial dimensions. It turns out that, with a little practice, you can gain a fair degree of intuition for the four-dimensional regime (I published a study on this in the Journal of Experimental Psychology in 2008). To a simulated mind in a simulated world, the confines of physical reality would become irrelevant. If you don't have a body any longer, why pretend?
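As an illustration (this sketch is not from the essay or the study it mentions), the machinery such a four-dimensional game needs is modest: points in R^4, rotations acting in a chosen plane, and a perspective projection down to three dimensions for display. The camera distance and the tesseract example below are arbitrary choices.

```python
import numpy as np

def rotation_4d(i, j, theta):
    """Rotation matrix acting in the plane spanned by axes i and j of R^4."""
    R = np.eye(4)
    c, s = np.cos(theta), np.sin(theta)
    R[i, i], R[j, j] = c, c
    R[i, j], R[j, i] = -s, s
    return R

def project_to_3d(p, camera_w=3.0):
    """Simple perspective projection along the fourth axis (w)."""
    x, y, z, w = p
    scale = camera_w / (camera_w - w)   # points nearer in w appear larger
    return np.array([x, y, z]) * scale

# The 16 vertices of a tesseract (4-cube), centred on the origin.
vertices = np.array([[(i >> b & 1) * 2 - 1 for b in range(4)] for i in range(16)], float)

# Rotate the tesseract in the x-w plane and see where its vertices land in 3D.
R = rotation_4d(0, 3, np.pi / 6)
for v in vertices:
    print(project_to_3d(R @ v))
```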

All of the changes described above, as exotic as they are and as disturbing as some of them might seem, are in a sense minor. They are about individual minds and individual experiences. If uploading were only a matter of exotic entertainment, literalising people's psychedelic fantasies, then it would be of limited significance. But if simulated minds can be run in a simulated world, then the most transformative change, the deepest shift in human experience, would be the loss of individuality itself: the integration of knowledge into a single intelligence, smarter and more capable than anything that could exist in the natural world.

You wake up in a simulated welcome hall in some type of simulated body with standard-issue simulated clothes. What do you do? Maybe you take a walk and look around. Maybe you try the food. Maybe you play some tennis. Maybe go watch a movie. But sooner or later, most people will want to reach for a cell phone. Send a tweet from paradise. Text a friend. Get on Facebook. Connect through social media. But here is the quirk of uploaded minds: the rules of social media are transformed.

In the real world, two people can share experiences and thoughts. But lacking a USB port in our heads, we can't directly merge our minds. In a simulated world, that barrier falls. A simple app, and two people will be able to join thoughts directly with each other. Why not? It's a logical extension. We humans are hyper-social. We love to network. We already live in a half-virtual world of minds linked to minds. In an artificial afterlife, given a few centuries and a few tweaks to the technology, what is to stop people from merging into überpeople who are combinations of wisdom, experience, and memory beyond anything possible in biology? Two minds, three minds, 10, and pretty soon everyone is linked mind-to-mind. The concept of separate identity is lost. The need for simulated bodies walking in a simulated world is lost. The need for simulated food and simulated landscapes and simulated voices disappears. Instead, a single platform of thought, knowledge, and constant realisation emerges. What starts out as an artificial way to preserve minds after death gradually takes on an emphasis of its own. Real life, our life, shrinks in importance until it becomes a kind of larval phase. Whatever quirky experiences you might have had during your biological existence, they would be valuable only if they can be added to the longer-lived and much more sophisticated machine.

I am not talking about utopia. To me, this prospect is three parts intriguing and seven parts horrifying. I am genuinely glad I won't be around. This will be a new phase of human existence that is just as messy and difficult as any other phase has been, one as alien to us now as the internet age would have been to a Roman citizen 2,000 years ago; as alien as Roman society would have been to a Natufian hunter-gatherer 10,000 years before that. Such is progress. We always manage to live more-or-less comfortably in a world that would have frightened and offended the previous generations.

Visit link:

The virtual afterlife will transform humanity | Aeon Essays

Your mind will not be uploaded - Soft Machines

The recent movie Transcendence will not be troubling the sci-fi canon of classics, if the reviews are anything to go by. But its central plot device (uploading a human consciousness to a computer) remains both a central aspiration of transhumanists, and a source of queasy fascination to the rest of us. The idea is that someone's mind is simply a computer programme, one that in the future could be run on a much more powerful computer than a brain, just as one might run an old arcade game on a modern PC in emulation mode. Mind uploading has a clear appeal for people who wish to escape the constraints of our flesh and blood existence, notably the constraint of our inevitable mortality.

In this post I want to consider two questions about mind uploading, from my perspective as a scientist. I'm going to use as an operational definition of uploading a mind the requirement that we can carry out a computer simulation of the activity of the brain in question that is indistinguishable in its outputs from the brain itself. For this, we would need to be able to determine the state of an individual's brain to sufficient accuracy that it would be possible to run a simulation that accurately predicted the future behaviour of that individual and would convince an external observer that it faithfully captured the individual's identity. I'm entirely aware that this operational definition already glosses over some deep conceptual questions, but it's a good concrete starting point. My first question is whether it will be possible to upload the mind of anyone reading this now. My answer to this is no, with a high degree of probability, given what we know now about how the brain works, what we can do now technologically, and what technological advances are likely in our lifetimes. My second question is whether it will ever be possible to upload a mind, or whether there is some point of principle that will always make this impossible. I'm obviously much less certain about this, but I remain sceptical.

This will be a long post, going into some technical detail. To summarise my argument, I start by asking whether or when it will be possible to map out the wiring diagram of an individual's brain: the map of all the connections between its 100 billion or so neurons. We'll probably be able to achieve this mapping in the coming decades, but only for a dead and sectioned brain; the challenges for mapping out a living brain at sub-micron scales look very hard. Then we'll ask some fundamental questions about what it means to simulate a brain. Simulating brains at the level of neurons and synapses requires the input of phenomenological equations, whose parameters vary across the components of the brain and change with time, and are inaccessible to in-vivo experiment. Unlike artificial computers, there is no clean digital abstraction layer in the brain; given the biological history of nervous systems as evolved, rather than designed, systems, there's no reason to expect one. The fundamental unit of biological information processing is the molecule, rather than any higher-level structure like a neuron or a synapse; molecular-level information processing evolved very early in the history of life. Living organisms sense their environment, they react to what they are sensing by changing the way they behave and, if they are able to, by changing the environment too. This kind of information processing, unsurprisingly, remains central to all organisms, humans included, and this means that a true simulation of the brain would need to be carried out at the molecular scale, rather than the cellular scale. The scale of the necessary simulation is out of reach of any currently foreseeable advance in computing power. Finally, I will conclude with some much more speculative thoughts about the central role of randomness in biological information processing. I'll ask where this randomness comes from, finding an ultimate origin in quantum mechanical fluctuations, and speculate about what in-principle implications that might have for the simulation of consciousness.

Why would people think mind uploading will be possible in our lifetimes, given the scientific implausibility of this suggestion? I ascribe this to a combination of over-literal interpretation of some prevalent metaphors about the brain, over-optimistic projections of the speed of technological advance, a lack of clear thinking about the difference between evolved and designed systems, and above all wishful thinking arising from people's obvious aversion to death and oblivion.

On science and metaphors

I need to make a couple of preliminary comments to begin with. First, while I'm sure there's a great deal more biology to learn about how the brain works, I don't see yet that there's any cause to suppose we need fundamentally new physics to understand it. Of course, new discoveries may change everything, but it seems to me that the physics we've got is quite complicated enough, and this discussion will be couched entirely in currently known, fundamentally physicalist, principles.

The second point is that, to get anywhere in this discussion, we're going to need to immunise ourselves against the way in which almost all popular discussion of neuroscience is carried out in metaphorical language. Metaphors used clearly and well are powerful aids to understanding, but when we take them too literally they can be badly misleading. It's an interesting historical reflection that when computers were new and unfamiliar, the metaphorical traffic led from biological brains to electronic computers. Since computers were popularly described as electronic brains, it's not surprising that biological metaphors like memory were quickly naturalised in the way computers were described. But now the metaphors go the other way, and we think about the brain as if it were a computer (I think the brain is a computer, by the way, but it's a computer that's so different to man-made ones, so plastic and mutable, so much immersed in and responsive to its environment, that comparisons with the computers we know about are bound to be misleading). So if what we are discussing is how easy or possible it will be to emulate the brain with a man-made computer, the fact that we are so accustomed to metaphorical descriptions of brains in terms of man-made computers will naturally bias us to positive answers. It's too easy to move from saying a neuron is analogous to a simple combination of logic gates in a computer, say, to thinking that it can be replaced by one. A further problem is that many of these metaphors are now so stale and worn out that they have lost all force, and the substance of the original comparison has been forgotten. We often hear, for example, the assertion that some characteristic or other is hard-wired in the brain, but if one stops to think what an animal's brain looks and feels like, there's nothing much hard about it. It's a soft machine.

Mapping the brain's wiring diagram

One metaphor that is important is the idea that the brain has a wiring diagram. The human brain has about 100 billion neurons, each of which is connected to many others by thin fibres (the axons and dendrites) along which electrical signals pass. There's about 100,000 miles of axon in a brain, meeting at somewhere between a hundred trillion and a thousand trillion synaptic connections. It's this pattern of connectivity between the neurons, through the axons and dendrites, that constitutes the wiring diagram of the brain. I'll argue below that knowing this wiring diagram is not yet a sufficient condition for simulating the operation of a brain; it must surely, however, be a necessary one.

So far, scientists have successfully mapped out the wiring diagram of one organism's nervous system: that of the microscopic worm C. elegans, which has a total of around 300 neurons. This achievement was itself a technical tour-de-force, which illustrates what would need to be done to determine the immeasurably more complex wiring diagram of the human brain. The issue is that these fibres are thin (hundreds of nanometres, for the thinnest of them), very densely packed, and the fibres from a single neuron can pervade a very large volume (this review in Science, The Big and the Small: Challenges of Imaging the Brain's Circuits ($), is an excellent up-to-date overview of what's possible now and what the challenges are). Currently electron microscopy is required to resolve the finest connections, and this can only be done on thin sections. Although new high-resolution imaging techniques may well be developed, it's difficult to see how this requirement to image section by section will go away. Magnetic resonance imaging, on the other hand, can image an intact brain, but at much lower resolution: more like millimetres than nanometres. The resolution of MRI derives from the strength of the magnetic field gradient you can sustain. You can have a large gradient over a small volume, but if you're constrained to keep the brain intact that provides quite a hard limit.
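To get a feel for the scale involved, here is a back-of-envelope sketch of the raw data volume for electron-microscope imaging of a whole human brain. Every input number below (brain volume, voxel size, bytes per voxel) is an illustrative assumption, not a figure taken from the article.

```python
# Back-of-envelope: raw data volume for EM imaging of a whole human brain.
# All input numbers are illustrative assumptions, not measurements.
brain_volume_cm3 = 1200.0     # assumed adult brain volume, roughly 1.2 litres
voxel_nm = (4, 4, 30)         # assumed EM voxel: 4 nm x 4 nm in-plane, 30 nm sections
bytes_per_voxel = 1           # assumed 8-bit greyscale

voxel_volume_nm3 = voxel_nm[0] * voxel_nm[1] * voxel_nm[2]
brain_volume_nm3 = brain_volume_cm3 * (1e7 ** 3)   # 1 cm = 1e7 nm
n_voxels = brain_volume_nm3 / voxel_volume_nm3
raw_bytes = n_voxels * bytes_per_voxel

print(f"voxels: {n_voxels:.2e}")
print(f"raw data: {raw_bytes / 1e18:.0f} exabytes")   # thousands of exabytes under these assumptions
```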

Proponents of mind uploading who recognise these difficulties at this point resort to the idea of nanobots crawling through the brain, reading it from the inside. I've discussed at length why I think it will be very much more difficult than people think to create such nanobots, for example in my article Rupturing the Nanotech Rapture, and in Nanobots, nanomedicine, Kurzweil, Freitas and Merkle I discuss why I don't think the counter-arguments of their proponents are convincing.

Mapping out all the neural connections of a human brain, then, will be difficult. It probably will be done, on a timescale perhaps of decades. The big but, though, is that this mapping will be destructive, and the brain it is done on will be definitively dead before the process starts. And massive job though it will be to map out this micro-scale connectome, there's something very important it doesn't tell you, something that marks the difference between a live brain and a dead lump of meat: what the initial electrical state of the brain is, where the ion gradients are, and what the molecules are doing. But more on molecules later.

Modelling, simulation, emulation: why mind uploading might make sense if you believed in intelligent design

If you did have a map of all the neural connections of a human brain, dead or alive, is that enough to simulate it? You could combine the map with known equations for the propagation of electrical signals along axons (the Hodgkin-Huxley equations), models of neurons and models for the behaviour of synapses. This is the level of simulation, for example, carried out in the Blue Brain project (see this review (PDF) for a semi-technical overview). This is a very interesting thing to do from the point of view of neuroscience, but it is not a simulation of a human brain, and certainly not of any individual's brain. It's a model, which aggregates phenomenological descriptions of the collective behaviours and interactions of components like the many varieties of voltage-gated ion channels and the synaptic vesicles. The equations you'd use to model an individual synapse, for example, would have different parameters for different synapses, and these parameters change with time (and in response to the information being processed). Without an understanding of what's going on in the neuron at the molecular level, these are parameters you would need to measure experimentally for each synapse.
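For concreteness, here is a minimal sketch of this kind of phenomenological modelling: a forward-Euler integration of the classic Hodgkin-Huxley equations with textbook squid-axon parameters (membrane potential in mV relative to rest). The point made above survives in the code: every conductance, reversal potential and rate function below is a measured, component-specific parameter, not something that could be read off a wiring diagram.

```python
import math

# Classic Hodgkin-Huxley squid-axon parameters (V in mV relative to rest, t in ms).
C_m = 1.0                            # membrane capacitance, uF/cm^2
g_Na, g_K, g_L = 120.0, 36.0, 0.3    # maximal conductances, mS/cm^2
E_Na, E_K, E_L = 115.0, -12.0, 10.6  # reversal potentials, mV

def alpha_m(V): return 0.1 * (25 - V) / (math.exp((25 - V) / 10) - 1)
def beta_m(V):  return 4.0 * math.exp(-V / 18)
def alpha_h(V): return 0.07 * math.exp(-V / 20)
def beta_h(V):  return 1.0 / (math.exp((30 - V) / 10) + 1)
def alpha_n(V): return 0.01 * (10 - V) / (math.exp((10 - V) / 10) - 1)
def beta_n(V):  return 0.125 * math.exp(-V / 80)

def simulate(I_ext=10.0, dt=0.01, t_max=50.0):
    """Forward-Euler integration; returns the membrane potential trace."""
    V, m, h, n = 0.0, 0.05, 0.6, 0.32     # approximate resting-state initial conditions
    trace = []
    for _ in range(int(t_max / dt)):
        I_Na = g_Na * m**3 * h * (V - E_Na)
        I_K  = g_K  * n**4     * (V - E_K)
        I_L  = g_L             * (V - E_L)
        V += dt * (I_ext - I_Na - I_K - I_L) / C_m
        m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
        h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
        n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
        trace.append(V)
    return trace

trace = simulate()
spikes = sum(1 for a, b in zip(trace, trace[1:]) if a < 50 <= b)
print("action potentials in 50 ms:", spikes)
```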

An analogy might make this clearer. Let me ask this question: is it possible to simulate the CPU in your mobile phone? At first sight this seems a stupid question: of course one can predict with a very high degree of certainty what the outputs of the CPU would be for any given set of inputs. After all, the engineers at ARM will have done just such simulations before any of the designs had even been manufactured, using well-understood and reliable design software. But a sceptical physicist might point out that every CPU is different at the atomic level, due to the inherent finite tolerances of manufacturing, and in any case the scale of the system is much too large to be able to simulate at the quantum mechanical level that would be needed to capture the electronic characteristics of the device.

In this case, of course, the engineers are right, for all practical purposes. This is because the phenomenology that predicts the behaviour of individual circuit elements is well understood in terms of the physics, and the way these elements behave is simple, reliable and robust: robust in the sense that quite a lot of variation in the atomic configuration produces the same outcomes. We can think of the system as having three distinct levels of description. There is the detailed level of what the electrons and ions are doing, which would account for the basic electrical properties of the component semiconductors and insulators, and the junctions and interfaces between them. Then there is the behaviour of the circuit elements that are built from these materials: the current-voltage characteristics of the field-effect transistors, and the way these components are built up into circuits. And finally, there is a description at a digital level, in which logical operations are implemented. Once one has designed circuit elements with clear thresholds and strongly non-linear behaviour, one can rely on there being a clean separation between the digital and physical levels. It's this clean separation between the physical and the digital that makes the job of emulating the behaviour of one type of CPU on another one relatively uncomplicated.
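A toy example of why that clean separation matters: once analog voltages are squashed through a strongly non-linear threshold, a good deal of device variation and noise produces exactly the same logical outcome, which is what makes emulation at the digital level tractable. The voltage levels and noise model below are arbitrary choices for illustration.

```python
import random

V_HIGH, V_LOW, THRESHOLD = 1.0, 0.0, 0.5   # idealised logic levels (arbitrary units)

def noisy(v, sigma=0.1):
    """A 'physical' voltage: the nominal level plus device variation and noise."""
    return v + random.gauss(0, sigma)

def digitise(v):
    """The digital abstraction: everything above threshold reads as 1."""
    return 1 if v > THRESHOLD else 0

def nand(a, b):
    """A NAND gate defined purely at the digital level."""
    return 0 if (a and b) else 1

# Despite per-gate analog variation, the digital behaviour is identical every time.
random.seed(0)
for _ in range(5):
    a = digitise(noisy(V_HIGH))
    b = digitise(noisy(V_LOW))
    print(nand(a, b))   # always 1: robust to the underlying physical noise
```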

But this separation between the physical and the digital in an integrated circuit isn't an accident or something pre-ordained: it happens because we've designed it to be that way. For those of us who don't accept the idea of intelligent design in biology, that's not true for brains. There is no clean digital abstraction layer in a brain; why should there be, unless someone designed it that way? In a brain, for example, the digital is continually remodelling the physical: we see changes in connectivity and changes in synaptic strength as a consequence of the information being processed, changes that, as we will see, are the manifestation of substantial physical changes, at the molecular level, in the neurons and synapses.

The unit of biological information processing is the molecule

Is there any general principle that underlies biological information processing, in the brain and elsewhere, that would help us understand what ionic conduction, synaptic response, learning and so on have in common? I believe there is: underlying all these phenomena are processes of macromolecular shape change in response to a changing local environment. Ion channel proteins change shape in response to the electric field across the membrane, opening or closing pores; at the synapse, shape-changing proteins respond to electrical changes to trigger the bursting open of synaptic vesicles to release the neurotransmitters, which themselves bind to protein receptors to transmit their signal; and complicated sequences of protein shape changes underlie the signalling networks that strengthen and weaken synaptic responses to make memory, remodelling the connections between neurons.

This emphasises that the fundamental unit of biological information processing is not the neuron or the synapse; it's the molecule. Dennis Bray, in an important 1995 paper, Protein molecules as computational elements in living cells, pointed out that a protein molecule can act as a logic gate through the process of allostery: its catalytic activity is modified by the presence or absence of bound chemicals. In this chemical version of logic, the inputs are the presence or absence of certain small molecules, and the outputs are the molecules that the protein produces, in the presence of the right input chemicals, by catalysis. As these output chemicals can themselves be the inputs to other protein logic gates, complex computational networks linking the inputs and outputs of many different logic gates can be built up. The ultimate inputs of these circuits will be environmental cues: the presence or absence of chemicals or other environmental triggers detected by molecular sensors at the surface of the cells. The ultimate outputs can be short-term, such as activating a molecular motor so that a cell swims towards a food source or away from a toxin. Or they can be long-term, such as activating and deactivating different genes so that the cell builds different structures for itself, or even changes the entire direction of its development.
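A toy rendering of Bray's picture may help: each "protein" below produces its output molecule only when all of its required inputs are present, and outputs can feed further gates. The molecule names and the network itself are invented purely for illustration.

```python
# Toy chemical logic network in the spirit of Bray's "protein molecules as
# computational elements": each protein catalyses its product only when all of
# its required input molecules are present. All names are illustrative.
network = {
    # product      : set of required inputs (small molecules or other products)
    "messenger_A"  : {"sugar", "receptor_bound"},   # an AND gate
    "messenger_B"  : {"toxin"},                     # a simple relay
    "motor_signal" : {"messenger_A"},               # swim towards food...
    "stop_signal"  : {"messenger_B"},               # ...unless toxin detected
}

def respond(environment):
    """Iterate the network to a fixed point and return everything it produces."""
    present = set(environment)
    while True:
        produced = {p for p, inputs in network.items() if inputs <= present}
        if produced <= present:
            return present - set(environment)
        present |= produced

print(respond({"sugar", "receptor_bound"}))   # -> {'messenger_A', 'motor_signal'}
print(respond({"sugar"}))                     # -> set()
print(respond({"toxin"}))                     # -> {'messenger_B', 'stop_signal'}
```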

This is how a single-celled organism like an amoeba can exhibit behaviour that is in effect purposeful, that is adaptive to the clues it detects from the environment around it. All living cells process information this way. In the collective alliance of cells that makes up a multi-cellular organism like a human, all our cells have the ability to process information. The particular cells that specialise in doing information processing and long-ranged communication, the neurons, start out with the general capability for computation that all cells have, but through evolution have developed this capability to a higher degree and added to it some new tricks. The most important of these new tricks is an ability to control the flow of ions across a membrane in a way that modifies the membrane potential, allowing information to be carried over long distances by the passage of shock waves of membrane potential, and communications to be made between neurons, in response to these rapid changes in membrane potential, through the release of chemicals at synapses. But, as always happens in evolved systems, these are new tricks built on the old hardware and old design principles: molecules whose shape changes in response to changes in their environment, this shape change producing functional effects (such as the opening of an ion channel in response to a change in membrane potential).

The molecular basis of biological information processing emphasises the limitations of the wiring metaphor. Determining the location and connectivity of individual neurons (the connectome, as it has begun to be called in neuroscience) is a necessary, but far from sufficient, condition for specifying the informational state of the brain; to do that completely requires us to know where the relevant molecules are, how many of them are present, and what state they're in.

The brain, randomness, and quantum mechanics

The molecular basis of biological computation means that it isn't deterministic; it's stochastic, it's random. This randomness isn't an accidental add-on; it's intrinsic to the way molecular information processing works. Any molecule in a warm, wet, watery environment like the cell is constantly bombarded by its neighbouring water molecules, and this bombardment leads to the constant jiggling we call Brownian motion. But it's exactly the same bombardment that drives the molecule to change shape when its environment changes. So if we simulate, at the molecular level, the key parts of the information processing system of the brain, like the ion channels or the synaptic vesicles, or the broader cell-signalling mechanisms by which the neurons remodel themselves in response to the information they carry, we need to explicitly include that randomness.

I want to speculate here about what the implications are of this inherently random character of biological information processing. A great deal has been written about randomness, determinism and the possibility of free will, and I'm largely going to avoid these tricky issues. I will make one important point, though. It seems to me that all the agonising about whether the idea of free will is compatible with a brain that operates through deterministic physics is completely misplaced, because the brain just doesn't operate through deterministic physics.

In a computer simulation, we'd build in the randomness by calls to a pseudo-random number generator, as we compute the noise term in the Langevin equation that would describe, for example, the internal motions of a receptor protein docking with a neurotransmitter molecule. In the real world, the question we have to answer is whether this randomness is simply a reflection of our lack of knowledge. Does it simply arise from a decision we make not to keep track of every detail of each molecular motion in a very complex system? Or is it real randomness, intrinsic to the fundamental physics, and in particular to the quantum mechanical character of reality? I think it is real randomness, whose origins can be traced back to quantum fluctuations.
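As a minimal illustration of what such a simulation looks like, here is an overdamped Langevin sketch: a single reaction coordinate (standing in for a conformational variable of a hypothetical receptor protein) diffusing in a double-well potential, with the noise term supplied by a pseudo-random number generator. The potential and the parameters are illustrative assumptions, not values for any real protein.

```python
import math, random

# Overdamped Langevin dynamics for one reaction coordinate x (illustrative units):
#   dx = -U'(x) * dt + sqrt(2 * D * dt) * xi,  with xi a standard normal deviate.
# The double-well potential stands in for two conformations of a protein.
def dUdx(x):
    return 4 * x**3 - 4 * x          # U(x) = x^4 - 2x^2, wells at x = -1 and x = +1

def langevin(steps=500_000, dt=1e-4, D=1.0, x0=-1.0, seed=1):
    rng = random.Random(seed)        # the pseudo-random number generator
    x, hops, side = x0, 0, -1
    for _ in range(steps):
        x += -dUdx(x) * dt + math.sqrt(2 * D * dt) * rng.gauss(0, 1)
        if x * side < -0.5:          # crossed over to the other conformation
            hops += 1
            side = -side
    return hops

print("conformational flips driven purely by the noise term:", langevin())
```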

To be clear, I'm not claiming here that the brain is a quantum computer, in the sense that it exploits quantum coherence in the way suggested by Roger Penrose. It seems to me difficult to understand how sufficient coherence could be maintained in the warm and wet environment of the cell. Instead, I want to focus on the origin of the forces between atoms and molecules. Attractions between uncharged molecules arise from the van der Waals force, which is most fundamentally understood as a fluctuation force: a force that arises from the way randomly fluctuating fields are modified by atoms and molecules. The fluctuating fields in question are the zero-point and thermal fluctuations of the electromagnetic field of the vacuum. Because the van der Waals force arises from quantum fluctuations, the force itself is fluctuating, and (see my earlier post Where the randomness comes from) these random fluctuations, of quantum origin, are sufficient to account for the randomness of the warm, wet nanoscale world.

The complexity theorist Scott Aaronson has recently written an interesting, but highly speculative, essay that touches on these issues: The Ghost in the Quantum Turing Machine (PDF). Aaronson argues that there is a type of unpredictability about the universe today that arises from the quantum unknowability of the initial conditions of the universe. He invokes the quantum no-cloning principle to argue that quantum state functions that have evolved unitarily, without decoherence, from the beginning of the universe (he calls these freebits) have a different character of uncertainty to the normal types of randomness we deal with using probability distributions. The question then is whether the fundamental unpredictability of freebits could be connected to some fundamental unpredictability of the decisions made by a human mind. Aaronson suggests it could, if there were a way in which the randomness inherent in the molecular processes underlying the operation of the brain (such as the opening and closing of ion channels) could be traced back to quantum uncertainty. My own suggestion is that the origin of van der Waals forces, as a fluctuation force, in the quantum fluctuations of the vacuum electromagnetic field, offers the connection that Aaronson is looking for.

If Aaronson is correct that his freebit picture shows how the fundamental unknowability of the quantum initial conditions of the universe translate into a fundamental unpredictability of certain physical processes now, and I am correct in my suggestion that the origins of the van der Waals force in the quantum fluctuations of fields provide a route through which such unpredictability translates into the outcomes of physical processes in the brain, then this provides an argument for mind uploading being impossible in principle. This is a conclusion I suggest only very tentatively.

Your mind will not be uploaded: dealing with it

But there's nothing tentative about my conclusion that if you are alive now, your mind will not be uploaded. What comforts does this leave for those fearing oblivion and the void, but reluctant to engage with the traditional consolations of religion and philosophy? Transhumanists have two cards left to play.

Cryonics offers the promise of putting your brain in a deep freeze to wait for technology to catch up with the challenges of uploading. It's clear that a piece of biological tissue that has formed a glass at -192 °C will, if kept at that temperature, remain in that state indefinitely without significant molecular rearrangements. The question is how much information is lost in the interval between clinical death and achieving that uniform low temperature, as a consequence both of the inevitable return to equilibrium once living systems fail, and of the physical effects of rapid cooling. Physiological structures may survive but, as we've seen, it's at the molecular level that the fundamentals of biological information processing take place, and current procedures will undoubtedly be highly perturbing at this level. All this leaves aside, of course, the sociological questions about why a future society, even if it has succeeded in overcoming the massive technical obstacles to characterising the brain at the molecular level, would wish to expend resources in reanimating the consciousnesses of the particular individuals who now choose this method of corporeal preservation.

The second possibility that appeals to transhumanists is that we are on the verge of a revolution in radical life extension. It's unquestionably true, of course, that improvements in public health, typical lifestyles and medical techniques have led to year-on-year increases in life expectancy, but this is driven mostly by reducing premature death. The increasingly prevalent diseases of old age (particularly neurodegenerative diseases like Alzheimer's) seem as intractable as ever; we don't even have a firm understanding of their causes, let alone working therapies. While substantial fractions of our older people are suffering from cruel and incurable dementias, the idea of radical life extension seems to me to be a hollow joke.

Why should I worry about what transhumanists, or anyone else, believe in? As I began to discuss at the end of my last post, Transhumanism has never been modern, I don't think the consequences of transhumanist thinking are entirely benign, and I'll expand on that in a later post. But there is a very specific concern about science policy that I would like to conclude with. Radical ideas like mind uploading are not part of the scientific mainstream, but there is a danger that they can still end up distorting scientific priorities. Popular science books, TED talks and the like flirt with such ideas and give them currency, if not credibility, adding fuel to the Economy of Promises that influences and distorts the way resources are allocated between different scientific fields. Scientists doing computational neuroscience don't themselves have to claim that their work will lead to mind uploading to benefit from an environment in which such claims are entertained by people like Ray Kurzweil, with a wide readership and some technical credibility. I think computational neuroscience will lead to some fascinating new science, but you could certainly question the proportionality of the resources it will receive compared to, say, more experimental work to understand the causes of neurodegenerative diseases.

Here is the original post:

Your mind will not be uploaded - Soft Machines

Brain Uploading – TV Tropes

"The point is, if we can store music on a compact disc, why can't we store a man's intelligence and personality on one? So, I have the engineers figuring that one out now."Artificial Intelligence is hard. Why reinvent the wheel, when you've got plenty of humans walking around? Who will miss one, right?Alternatively, you might be one of those humans looking for easy immortality. Either way, once you finish scanning the brain, you end up with a file that you run in a physics simulator, and presto, you have a computer that remembers being a human. If you do it carefully enough, the original brain won't even notice it happening.This computer has a number of advantages over a meat human. The simulation can be run many thousands of times faster than objective speed, if you've got enough computing power. It can be backed up with trivial ease. You can run multiple copies at the same time, and have them do different things, make exotic personality composites, and tinker around with the inner workings of the brain in ways that are either difficult or impossible to do with a meat brain. Additionally, there's the fact that it's impossible to kill as long as its data is backed up somewhere and there exists a computer on which to run it - you can just restart the simulation wherever you left off and the mind won't even recognize it.Critics of the concept are quick to point out that it presupposes an understanding of neurology (not just human neurology, but even the neurology of a common insect) far, far beyond what currently exists; and that without that knowledge, even the most powerful computer cannot do this. Proponents of the idea assure us that this knowledge is coming. Proponents who hope to live to see and actually benefit from it assure us that it's coming really really soon.As with The Singularity, the idea of brain uploading has inevitably taken on a quasi-religious aspect for many in recent years, since it does promise immortality of a sort (as long as your backups and the hardware to run them on are safe), and even transcendence of the body.The advantages bestowed by brain uploading are a bit overwhelming if you're trying to incorporate them into a story. It kind of kills the tension when the protagonist can restore from backup whenever the Big Bad kills them. Authors have devised a number of cop-outs, which you can recognize by asking these questions:

Follow this link:

Brain Uploading - TV Tropes

Mind uploading | Transhumanism Wiki | FANDOM powered by Wikia

In transhumanism and science fiction, mind uploading (also occasionally referred to by other terms such as mind transfer, whole brain emulation, or whole body emulation) refers to the hypothetical transfer of a human mind to a substrate different from a biological brain, such as a detailed computer simulation of an individual human brain.

The human brain contains a little more than 100 billion nerve cells called neurons, each individually linked to other neurons by way of connectors called axons and dendrites. Signals at the junctures (synapses) of these connections are transmitted by the release and detection of chemicals known as neurotransmitters. The brain contains cell types other than neurons (such as glial cells), some of which are structurally similar to neurons, but the information processing of the brain is thought to be conducted by the network of neurons.

Current biomedical and neuropsychological thinking is that the human mind is a product of the information processing of this neural network. To use an analogy from computer science, if the neural network of the brain can be thought of as hardware, then the human mind is the software running on it.

Mind uploading, then, is the act of copying or transferring this "software" from the hardware of the human brain to another processing environment, typically an artificially created one.

The concept of mind uploading, then, is strongly mechanistic, relying on several assumptions about the nature of human consciousness and the philosophy of artificial intelligence. It assumes that strong AI machine intelligence is not only possible but indistinguishable from human intelligence, and it denies the vitalist view of human life and consciousness.

Mind uploading is completely speculative at this point in time; no technology exists which can accomplish this.

The relationship between the human mind and the neural circuitry of the brain is currently poorly understood. Thus, most theoretical approaches to mind uploading are based on the idea of recreating or simulating the underlying neural network. This approach would theoretically eliminate the need to understand how such a system works if the component neurons and their connections can be simulated with enough accuracy.

It is unknown how precise the simulation of such a neural network would have to be to produce a functional simulation of the brain. It is possible, however, that simulating the functions of a human brain at the cellular level might be much more difficult than creating a human level artificial intelligence, which relied on recreating the functions of the human mind, rather than trying to simulate the underlying biological systems.[citation needed]

Thinkers with a strongly mechanistic view of human intelligence (such as Marvin Minsky) or a strongly positive view of robot-human social integration (such as Hans Moravec and Ray Kurzweil) have openly speculated about the possibility and desirability of this.

In the case where the mind is transferred into a computer, the subject would become a form of artificial intelligence, sometimes called an infomorph or "nomorph". In a case where it is transferred into an artificial body, to which its consciousness is confined, it would also become a robot. In either case it might claim ordinary human rights, certainly if the consciousness within felt (or did a good job of simulating feeling) as if it were the donor.

Uploading consciousness into bodies created by robotic means is a goal of some in the artificial intelligence community. In the uploading scenario, the physical human brain does not move from its original body into a new robotic shell; rather, the consciousness is assumed to be recorded and/or transferred to a new robotic brain, which generates responses indistinguishable from the original organic brain.

The idea of uploading human consciousness in this manner raises many philosophical questions which people may find interesting or disturbing, such as matters of individuality and the soul. Vitalists would say that uploading was a priori impossible. Many people also wonder whether, if they were uploaded, it would be their sentience uploaded, or simply a copy.

Even if uploading is theoretically possible, there is currently no technology capable of recording or describing mind states in the way imagined, and no one knows how much computational power or storage would be needed to simulate the activity of the mind inside a computer. On the other hand, advocates of uploading have made various estimates of the amount of computing power that would be needed to simulate a human brain, and based on this a number have estimated that uploading may become possible within decades if trends such as Moore's Law continue.[citation needed]
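The arithmetic behind such estimates is short enough to write down. In the sketch below, every number (the computational requirement attributed to the brain, the assumed present-day machine, and the doubling period) is an assumption chosen purely for illustration, which is precisely why the resulting "within decades" figure should be treated with caution.

```python
import math

# Illustrative Moore's-law extrapolation; all three numbers are assumptions.
brain_flops_needed = 1e18   # one common guess for whole-brain simulation (FLOP/s)
current_flops      = 1e15   # an assumed present-day machine (FLOP/s)
doubling_years     = 2.0    # assumed doubling period for available compute

doublings = math.log2(brain_flops_needed / current_flops)
print(f"{doublings:.1f} doublings, i.e. about {doublings * doubling_years:.0f} years")
# -> roughly 10 doublings, about 20 years under these assumptions; the conclusion
#    is entirely hostage to the inputs, which is the point.
```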

If it is possible for human minds to be modeled and treated as software objects which can be instanced multiple times, in multiple processing environments, many potentially desirable possibilities open up for the individual.

If the mental processes of the human mind can be disassociated from its original biological body, it is no longer tied to the limits and lifespan of that body. In theory, a mind could be voluntarily copied or transferred from body to body indefinitely and therefore become immortal, or at least exercise conscious control of its lifespan.

Alternatively, if cybernetic implants could be used to monitor and record the structure of the human mind in real time then, should the body of the individual be killed, such implants could be used to later instance another working copy of that mind. It is also possible that periodic backups of the mind could be taken and stored external to the body and a copy of the mind instanced from this backup, should the body (and possibly the implants) be lost or damaged beyond recovery. In the latter case, any changes and experiences since the time of the last backup would be lost.

Such possibilities have been explored extensively in fiction: This Number Speaks, Nancy Farmer's The House of the Scorpion, Newton's Gate, John Varley's Eight Worlds series, Greg Egan's Permutation City, Diaspora, Schild's Ladder and Incandescence, the Revelation Space series, Peter Hamilton's Pandora's Star duology, Bart Kosko's Fuzzy Time, the Armitage III series, the Takeshi Kovacs universe, Iain M. Banks' Culture novels, Cory Doctorow's Down and Out in the Magic Kingdom, and the works of Charles Stross. And in television sci-fi shows: Battlestar Galactica, Stargate SG-1, among others.

Another concept explored in science fiction is the idea of more than one running "copy" of a human mind existing at once. Such copies could either be full copies, or limited subsets of the complete mentality designed for particular limited functions. Such copies would allow an "individual" to experience many things at once, and later integrate the experiences of all copies into a central mentality at some point in the future, effectively allowing a single sentient being to "be many places at once" and "do many things at once".

The implications of such entities have been explored in science fiction. In his book Eon, Greg Bear uses the terms "partials" and "ghosts", while Charles Stross's novels Accelerando and Glasshouse deal with the concepts of "forked instances" of conscious beings as well as "backups".

In Charles Sheffield's Tomorrow and Tomorrow, the protagonist's consciousness is duplicated thousands of times electronically and sent out on probe ships and uploaded into bodies adapted to native environments of different planets. The copies are eventually reintegrated back into the "master" copy of the consciousness in order to consolidate their findings.

Such partial and complete copies of a sentient being again raise issues of identity and personhood: is a partial copy of a sentient being itself sentient? What rights might such a being have? Since copies of a personality have different experiences, are they not slowly diverging and becoming different entities? At what point do they become different entities?

If the body and the mind of the individual can be disassociated, then the individual is theoretically free to choose their own incarnation. They could reside within a completely human body, within a modified physical form, or within simulated realities. Individuals might change their incarnations many times during their existence, depending on their needs and desires.

Choices of the individuals in this matter could be restricted by the society they exist within, however. In the novel Eon by Greg Bear, individuals could incarnate physically (within "natural" biological humans, or within modified bodies) a limited number of times before being legally forced to reside within the "city memory" as infomorphic "ghosts".

Once an individual is moved to a virtual simulation, the only input needed would be energy, which would be provided by the large computing devices hosting those minds. All the food, drink, movement and travel, or any other imaginable thing, would simply require the energy needed to perform the corresponding computations.

Almost all scientists, thinkers and intelligent people would be moved to this virtual environment once they die. In this virtual environment, their brain capacity would be expanded by the speed and storage of quantum computers. In a virtual environment, an idea and its final product are not far apart, so more and more innovations would be sent back to the real world, speeding up our technological development.

Regardless of the techniques used to capture or recreate the function of a human mind, the processing demands of such venture are likely to be immense.

Henry Markram, lead researcher of the "Blue Brain Project", has stated that "it is not [their] goal to build an intelligent neural network", based solely on the computational demands such a project would have[1].

Advocates of mind uploading point to Moore's law to support the notion that the necessary computing power may become available within a few decades, though it would probably require advances beyond the integrated circuit technology which has dominated since the 1970s. Several new technologies have been proposed, and prototypes of some have been demonstrated, such as the optical neural network based on the silicon-photonic chip (harnessing special physical properties of Indium Phosphide) which Intel showed the world for the first time on September 18, 2006.[3] Other proposals include three-dimensional integrated circuits based on carbon nanotubes (researchers have already demonstrated individual logic gates built from carbon nanotubes[4]) and also perhaps the quantum computer, currently being worked on internationally as well as most famously by computer scientists and physicists at the IBM Almaden Research Center, which promises to be useful in simulating the behavior of quantum systems; such ability would enable protein structure prediction which could be critical to correct emulation of intracellular neural processes.

Present methods require massive computational power (as the Blue Brain Project does with IBM's Blue Gene supercomputer) to use an essentially classical computing architecture for the serial deduction of the quantum mechanical processes involved in ab initio protein structure prediction. Should the quantum computer become a reality, its capacity for exactly such rapid calculations of quantum mechanical physics may well help the effort by reducing the computational power, physical size and energy that Markram warns would be needed, and which are why he thinks an entire brain's simulation, let alone its emulation (at both cellular and molecular levels), would be difficult, besides unattractive, to attempt. Reiteration may also be useful for the distributed simulation of a common, repeated function (e.g., proteins).

Ultimately, nano-computing is projected by some[citation needed] to provide, with capacity to spare, the number of computations per second estimated to be necessary. If Kurzweil's Law of Accelerating Returns (a variation on Moore's Law) proves true, the rate of technological development should accelerate exponentially towards the technological singularity, heralded by the advent of viable though relatively primitive mind uploading and/or "strong" (human-level) AI technologies; his prediction is that the Singularity may occur around the year 2045.[5]

The structure of a neural network is also different from classical computing designs. Memory in a classical computer is generally stored in a two-state design, or bit, although one of the two components is modified in dynamic RAM, and some forms of flash memory can use more than two states under some circumstances. Gates inside central processing units often use this two-state, or digital, type of design as well. In some ways a neural network or brain could be thought of as being like a memory unit in a computer, but one with an extremely vast number of states, corresponding to the total number of neurons. Beyond that, whether the action potential of a neuron will form, based on the summation of the inputs from different dendrites, might be something that is more analog in nature than what happens in a computer. One great advantage that a modern computer has over a biological brain, however, is that the speed of each electronic operation in a computer is many orders of magnitude faster than the time scales involved in the firing and transmission of individual nerve impulses. A brain, however, uses far more parallel processing than exists in most classical computing designs, so each of the slower neurons can make up for it by operating at the same time as the others.
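The contrast drawn here, between a two-state bit and the analog summation performed at the dendrites, can be made concrete with a toy leaky integrate-and-fire neuron. The parameters and inputs below are illustrative only.

```python
# A leaky integrate-and-fire toy neuron: dendritic inputs are summed as a
# continuously varying (analog) membrane potential, and only the threshold
# crossing yields anything resembling a discrete, bit-like event (a spike).
# All parameters are illustrative.
def lif_neuron(inputs, threshold=1.0, leak=0.9, weight=0.3):
    v, spikes = 0.0, []
    for t, n_active_dendrites in enumerate(inputs):
        v = v * leak + weight * n_active_dendrites   # analog summation with leak
        if v >= threshold:                           # digital-looking output
            spikes.append(t)
            v = 0.0                                  # reset after the spike
    return spikes

# Number of simultaneously active dendritic inputs at each time step:
activity = [0, 1, 1, 0, 3, 0, 0, 2, 2, 2, 0, 0]
print("spike times:", lif_neuron(activity))   # -> [4, 8] with these inputs
```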

There are many ethical issues concerning mind uploading. Viable mind uploading technology might challenge the ideas of human immortality, property rights, capitalism, human intelligence, an afterlife, and the Abrahamic view of man as created in God's image. These challenges often cannot be distinguished from those raised by all technologies that extend human technological control over human bodies, e.g. organ transplant. Perhaps the best way to explore such issues is to discover principles applicable to current bioethics problems, and question what would be permissible if they were applied consistently to a future technology. This points back to the role of science fiction in exploring such problems, as powerfully demonstrated in the 20th century by such works as Brave New World and Nineteen Eighty-Four, each of which frame current ethical problems in a future environment where those have come to dominate the society.

Another issue with mind uploading is whether an uploaded mind is really the "same" sentience, or simply an exact copy with the same memories and personality. Although this difference would be undetectable to an external observer (and the upload itself would probably be unable to tell), it could mean that uploading a mind would actually kill it and replace it with a clone. Some people would be unwilling to upload themselves for this reason. If their sentience is deactivated even for a nanosecond, they assert, it is permanently wiped out. Some more gradual methods may avoid this problem by keeping the uploaded sentience functioning throughout the procedure.

True mind uploading remains speculative. The technology to perform such a feat is not currently available; however, a number of possible mechanisms and research approaches have been proposed for developing mind uploading technology.

Since the function of the human mind, and how it might arise from the working of the brain's neural network, are poorly understood issues, many theoretical approaches to mind uploading rely on the idea of emulation. Rather than having to understand the functioning of the human mind, the structure of the underlying neural network is captured and simulated with a computer system. The human mind then, theoretically, is generated by the simulated neural network in an identical fashion to its being generated by the biological neural network.

These approaches require only that we understand the nature of neurons and how their connections function, that we can simulate them well enough, that we have the computational power to run such large simulations, and that the state of the brain's neural network can be captured with enough fidelity to create an accurate simulation.

A possible method for mind uploading is serial sectioning, in which the brain tissue and perhaps other parts of the nervous system are frozen and then scanned and analyzed layer by layer, thus capturing the structure of the neurons and their interconnections[6]. The exposed surface of frozen nerve tissue would be scanned (possibly with some variant of an electron microscope) and recorded, and then the surface layer of tissue removed (possibly with a conventional cryo-ultramicrotome if scanning along an axis, or possibly through laser ablation if scans are done radially "from the outside inwards"). While this would be a very slow and labor intensive process, research is currently underway to automate the collection and microscopy of serial sections[7]. The scans would then be analyzed, and a model of the neural net recreated in the system that the mind was being uploaded into.
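As a schematic of the analysis step only, the toy sketch below merges segmented profiles that overlap between consecutive sections into single three-dimensional traces, a crude stand-in for following a neuron through the image stack. The data format and the bookkeeping are invented for illustration and bear no resemblance to real reconstruction pipelines.

```python
# Toy reconstruction step for serial-section data: profiles that overlap between
# consecutive sections are merged into the same 3D object, using union-find.
def merge_sections(sections):
    """sections[i] maps profile-id -> set of (x, y) pixels in section i."""
    parent = {}
    def find(a):
        while parent.setdefault(a, a) != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    def union(a, b):
        parent[find(a)] = find(b)

    for z in range(len(sections) - 1):
        for pid, pixels in sections[z].items():
            for qid, qpixels in sections[z + 1].items():
                if pixels & qpixels:                 # profiles overlap in (x, y)
                    union((z, pid), (z + 1, qid))
    # Group every (section, profile) pair by its root: one group per traced object.
    traces = {}
    for z, profiles in enumerate(sections):
        for pid in profiles:
            traces.setdefault(find((z, pid)), []).append((z, pid))
    return list(traces.values())

sections = [
    {"a": {(0, 0), (0, 1)}, "b": {(5, 5)}},
    {"c": {(0, 1), (1, 1)}, "d": {(5, 5), (5, 6)}},
    {"e": {(1, 1)}},
]
print(merge_sections(sections))   # 'a'-'c'-'e' form one trace, 'b'-'d' another
```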

There are uncertainties with this approach using current microscopy techniques. If it is possible to replicate neuron function from its visible structure alone, then the resolution afforded by a scanning electron microscope would suffice for such a technique[7]. However, as the function of brain tissue is partially determined by molecular events (particularly at synapses, but also at other places on the neuron's cell membrane), this may not suffice for capturing and simulating neuron functions. It may be possible to extend the techniques of serial sectioning and to capture the internal molecular makeup of neurons, through the use of sophisticated immunohistochemistry staining methods which could then be read via confocal laser scanning microscopy[citation needed].

A more advanced hypothetical technique that would require nanotechnology might involve infiltrating the intact brain with a network of nanoscale machines to "read" the structure and activity of the brain in situ, much like the electrode meshes used in current brain-computer interface research, but on a much finer and more sophisticated scale. The data collected from these probes could then be used to build up a simulation of the neural network they were probing, and even check the behavior of the model against the behavior of the biological system in real time.

In his 1988 book Mind Children, Hans Moravec describes a variation of this process. In it, nanomachines are placed in the synapses of the outer layer of cells in the brain of a conscious living subject. The system then models the outer layer of cells and recreates the neural net processes in whatever simulation space is being used to house the uploaded consciousness of the subject. The nanomachines can then block the natural signals sent by the biological neurons, but send and receive signals to and from the simulated versions of the neurons. Which system is doing the processing (biological or simulated) can be toggled back and forth, both automatically by the scanning system and manually by the subject, until it has been established that the simulation's behavior matches that of the biological neurons and that the subjective mental experience of the subject is unchanged. Once this is the case, the outer layer of neurons can be removed and their function turned over solely to the simulated neurons. This process is then repeated, layer by layer, until the entire biological brain of the subject has been scanned, modeled, checked, and disassembled. When the process is completed, the nanomachines can be removed from the spinal column of the subject, and the mind of the subject exists solely within the simulated neural network.
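Stripped of the (entirely hypothetical) nanotechnology, Moravec's procedure is essentially a verify-then-commit loop. The sketch below shows only that control flow; every object, function name and check is invented, and nothing here corresponds to a real interface.

```python
# Control-flow sketch of Moravec-style gradual replacement: model one layer,
# run it in parallel with the biology, commit only when outputs match and the
# subject reports no change. Every name here is hypothetical.
def gradual_replacement(brain_layers, build_model, outputs_match, subject_ok):
    simulated = []
    for layer in brain_layers:
        model = build_model(layer)                  # scan and model this layer
        while not (outputs_match(layer, model) and subject_ok()):
            model = build_model(layer)              # refine until indistinguishable
        simulated.append(model)                     # hand processing to the model
        layer.disconnect()                          # retire the biological layer
    return simulated                                # the mind now runs here

# A trivial stand-in showing that the loop terminates once the checks pass:
class FakeLayer:
    def disconnect(self): pass

sim = gradual_replacement(
    [FakeLayer(), FakeLayer()],
    build_model=lambda layer: "model",
    outputs_match=lambda layer, model: True,
    subject_ok=lambda: True,
)
print(len(sim), "layers now simulated")
```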

Alternatively, such a process might allow for the replacement of living neurons with artificial neurons one by one while the subject is still conscious, providing a smooth transition from an organic to a synthetic brain, which could be significant for those who worry about the loss of personal continuity that other uploading processes may entail. This method has been likened to upgrading the whole internet by replacing, one by one, each computer connected to it with a similar computer using newer hardware.

While many people are more comfortable with the idea of gradually replacing their natural selves than with the more radical and discontinuous forms of mental transfer, it still raises questions of identity. Is the individual preserved in this process, and if not, at what point does the individual cease to exist? If the original entity ceases to exist, what is the nature and identity of the individual created within the simulated neural network, or can any individual be said to exist there at all? This gradual replacement leads to a much more complicated and sophisticated version of the Ship of Theseus paradox.

It may also be possible to use advanced neuroimaging technology (such as magnetoencephalography) to build a detailed three-dimensional model of the brain using non-invasive and non-destructive methods. However, current imaging technology lacks the resolution needed for such a scan.

Such a process would leave the original entity intact, but the existence, nature, and identity of the resulting being in the simulated network are still open philosophical questions.

Another recently conceived possibility[citation needed] is the use of genetically engineered viruses that attach to synaptic junctions and release energy-emitting molecular compounds. These emissions could be detected externally and used to generate a functional model of the synapses in question and, given enough time, of the whole brain and nervous system.

An alternate set of possible theoretical approaches to mind uploading would require that we first understand the functions of the human mind well enough to create abstract models of parts, or the totality, of human mental processes. It would require not only that strong AI be possible, but also that the techniques used to create a strong AI system could be used to recreate a human-type mentality.

Such approaches might be more desirable if the abstract models required less computational power to execute than the neural network simulation of the emulation techniques described above.

Another theoretically possible method of mind uploading from an organic to an inorganic medium, related to the idea described above of replacing neurons one at a time while consciousness remains intact, would be a much less precise but much more feasible (in terms of technology currently known to be physically possible) process of "cyborging". Once a given person's brain is mapped, it is replaced piece by piece with computer devices that perform exactly the same functions as the regions they replace, after which the patient is allowed to regain consciousness and confirm that there has been no radical upheaval within his or her own subjective experience of reality. At that point, the patient's brain is re-mapped, another piece is replaced, and so on, until the patient exists on a purely hardware medium and can be safely extricated from the remaining organic body.
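
The loop implied by that description (map a region, swap in hardware, let the subject confirm nothing has changed, re-map, repeat) can be sketched as below; the region names, the mapping step, and the validation check are all hypothetical placeholders.

```python
# Illustrative sketch of the "cyborging" loop described above: map a region,
# swap in a device that reproduces its measured function, let the subject
# confirm nothing feels different, then re-map and move on. Region names,
# the mapping step, and the validation check are all hypothetical.


def map_region(region, brain_state):
    """Stand-in for mapping a region: record its measured behavior."""
    return dict(brain_state[region])


def validate(subject_report):
    return subject_report == "no change noticed"


def cyborg(brain_state, regions):
    hardware = {}
    for region in regions:
        blueprint = map_region(region, brain_state)  # map just before replacing
        hardware[region] = blueprint                 # device reproduces the mapped function
        del brain_state[region]                      # biological tissue retired
        if not validate("no change noticed"):
            raise RuntimeError(f"subjective upheaval after replacing {region}")
    return hardware


brain = {"V1": {"edges": "detect"}, "hippocampus": {"episodes": "store"},
         "motor cortex": {"reach": "plan"}}
print(cyborg(brain, list(brain)))
```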

However, critics contend[citation needed] that, given the degree of synergy throughout the neural plexus, altering any given cell that is functionally coupled to neighboring cells may change its electrical and chemical properties in ways that would not have occurred without interference, so that the true individual's signature is lost. Reversing that disturbance might be possible through damage anticipation and correction (inferring the original state from the particular damage rendered to it, in reverse chronological order), although this would be easier in a stable system, such as a brain held in cryosleep (which would introduce its own damage and alterations).[citation needed]

It has also been suggested (for example, in Greg Egan's "jewelhead" stories[8]) that a detailed examination of the brain itself may not be required, that the brain could be treated as a black box instead and effectively duplicated "for all practical purposes" by merely duplicating how it responds to specific external stimuli. This leads into even deeper philosophical questions of what the "self" is.
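
A rough software analogue of the black-box approach is system identification: probe the original with stimuli, fit a surrogate to its responses, and check the surrogate on stimuli it has not seen. The toy "black box" and the polynomial surrogate below are illustrative choices only.

```python
# A hedged sketch of the "black box" idea: ignore internal structure and fit a
# stand-in system purely from stimulus/response pairs, then check it on stimuli
# it has not seen. The black box here is an arbitrary toy function.
import numpy as np

rng = np.random.default_rng(1)


def black_box(stimulus):
    """The original system, observed only through its responses."""
    return np.sin(stimulus) + 0.5 * stimulus


# Probe the box, then fit a polynomial surrogate to its behavior.
probe = rng.uniform(-3, 3, size=200)
responses = black_box(probe)
surrogate = np.polynomial.Polynomial.fit(probe, responses, deg=9)

# "For all practical purposes": does the duplicate respond like the original?
test = rng.uniform(-3, 3, size=50)
error = np.max(np.abs(surrogate(test) - black_box(test)))
print(f"worst-case disagreement on new stimuli: {error:.4f}")
```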

On June 6, 2005, IBM and the Swiss Federal Institute of Technology in Lausanne announced the launch of a project to build a complete simulation of the human brain, entitled the "Blue Brain Project".[9] The project will use a supercomputer based on IBM's Blue Gene design to map the entire electrical circuitry of the brain. The project seeks to research aspects of human cognition and various psychiatric disorders caused by malfunctioning neurons, such as autism. Initial efforts are to focus on an experimentally accurate, programmed characterization of a single neocortical column in the brain of a rat, which is very similar to a human's but at a smaller scale, and then to expand to an entire neocortex (the alleged seat of higher intelligence) and eventually to the human brain as a whole.

The Blue Brain Project appears to use a combination of emulation and simulation techniques. The first stage of the program was to simulate a neocortical column at the molecular level. The program now appears to be building a simplified functional simulation of the neocortical column in order to simulate many of them and to model their interactions.
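
A sketch of what a "simplified functional simulation" of many interacting columns might look like follows; the leaky-integrator dynamics and random coupling are generic modeling conventions, not the Blue Brain Project's actual methods.

```python
# Hedged sketch of the "simplified functional simulation" strategy described
# above: instead of molecular detail, treat each neocortical column as a single
# rate unit and simulate many of them interacting. The dynamics and coupling
# here are generic leaky-integrator toys, not Blue Brain's models.
import numpy as np

rng = np.random.default_rng(2)

n_columns = 100
coupling = rng.normal(0, 0.1, size=(n_columns, n_columns))  # column-to-column weights
rates = rng.uniform(0, 1, size=n_columns)                    # firing rate per column
external = rng.uniform(0, 0.5, size=n_columns)               # sensory drive

dt, tau = 0.1, 1.0
for _ in range(500):
    drive = coupling @ np.tanh(rates) + external
    rates += dt / tau * (-rates + drive)                      # leaky-integrator update

print(f"mean column rate after settling: {rates.mean():.3f}")
```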

With most projected mind uploading technology it is implicit that "copying" a consciousness could be as feasible as "moving" it, since these technologies generally involve simulating the human brain in a computer of some sort, and digital files such as computer programs can be copied precisely. It is also possible that the simulation could be created without the need to destroy the original brain, so that the computer-based consciousness would be a copy of the still-living biological person, although some proposed methods such as serial sectioning of the brain would necessarily be destructive. In both cases it is usually assumed that once the two versions are exposed to different sensory inputs, their experiences would begin to diverge, but all their memories up until the moment of the copying would remain the same.
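
The point that copying is exact up to the moment of duplication, with divergence setting in only afterwards, is easy to see once a mind is treated as data; the cartoon MindState class below is an assumption made purely to illustrate that.

```python
# A minimal illustration of the copying point made above: once a mind-state is
# a data structure, copying is exact, the two copies share every memory up to
# the copy, and they diverge as soon as their inputs differ. The MindState
# class is, of course, a cartoon.
import copy


class MindState:
    def __init__(self):
        self.memories = []

    def experience(self, event):
        self.memories.append(event)


original = MindState()
original.experience("learned to ride a bike")
original.experience("saw the trailer")

uploaded = copy.deepcopy(original)          # exact duplicate at the copy moment
assert uploaded.memories == original.memories

original.experience("stayed in the biological body")
uploaded.experience("woke up in the simulation")

shared = [m for m in original.memories if m in uploaded.memories]
print(f"shared memories: {shared}")
print(f"diverged: {original.memories[-1]!r} vs {uploaded.memories[-1]!r}")
```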

By many definitions, both copies could be considered the "same person" as the single original consciousness before it was copied. At the same time, they can be considered distinct individuals once they begin to diverge, so the issue of which copy "inherits" what could be complicated. This problem is similar to that found when considering the possibility of teleportation, where in some proposed methods it is possible to copy (rather than only move) a mind or person. This is the classic philosophical issue of personal identity. The problem is made even more serious by the possibility of creating a potentially infinite number of initially identical copies of the original person, which would of course all exist simultaneously as distinct beings.

Philosopher John Locke published "An Essay Concerning Human Understanding" in 1689, in which he proposed the following criterion for personal identity: if you remember thinking something in the past, then you are the same person as he or she who did the thinking. Later philosophers raised various logical snarls, most of them caused by applying Boolean logic, the prevalent logic system at the time. It has been proposed that modern fuzzy logic can solve those problems,[10] showing that Locke's basic idea is sound if one treats personal identity as a continuous rather than discrete value.

In that case, when a mind is copied, whether during mind uploading, afterwards, or by some other means, the two copies are initially two instances of the very same person, but over time they will gradually become different people to an increasing degree.
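
One way to picture identity as a continuous rather than discrete value is to score it by the overlap of remembered experiences; the Jaccard measure in this sketch is an illustrative choice, not something proposed in the cited work.

```python
# A sketch of treating personal identity as a continuous quantity, as the
# fuzzy-logic reading of Locke suggests: score identity by the overlap of
# remembered experiences, so two fresh copies score 1.0 and drift downward as
# they diverge. The Jaccard measure is an illustrative choice.


def identity_degree(memories_a, memories_b):
    a, b = set(memories_a), set(memories_b)
    return len(a & b) / len(a | b) if (a | b) else 1.0


shared_past = [f"memory {i}" for i in range(90)]
copy_one = shared_past + ["kept the old job", "moved house"]
copy_two = shared_past + ["started a new job"]

print(f"degree of shared identity: {identity_degree(copy_one, copy_two):.2f}")
```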

The issue of copying vs moving is sometimes cited as a reason to think that destructive methods of mind uploading such as serial sectioning of the brain would actually destroy the consciousness of the original and the upload would itself be a mere "copy" of that consciousness. Whether one believes that the original consciousness of the brain would transfer to the upload, that the original consciousness would be destroyed, or that this is simply a matter of definition and the question has no single "objectively true" answer, is ultimately a philosophical question that depends on one's views of philosophy of mind.

Because of these philosophical questions about the survival of consciousness, some people would feel more comfortable with a method of uploading in which the transfer is gradual, replacing the original brain with a new substrate over an extended period of time, during which the subject appears to be fully conscious. This can be seen as analogous to the natural biological replacement of molecules in our brains with new ones taken in from eating and breathing, which may lead to almost all the matter in our brains being replaced in as little as a few months[11]. As mentioned above, this would likely take place through gradual cyborging, either nanoscopically or macroscopically, in which the brain (the original copy) is slowly replaced, bit by bit, with artificial parts that function in a near-identical manner; assuming this were possible at all, the person would not necessarily notice any difference as more and more of their brain became artificial. A gradual transfer also raises questions of identity similar to the classical Ship of Theseus paradox, although the natural replacement of molecules in the brain through eating and breathing raises these questions as well.

A computer capable of simulating a person might require microelectromechanical systems (MEMS), or perhaps optical or nano-computing, to achieve comparable speed, reduced size, and sophisticated telecommunication between the brain and the body (whether that body exists in virtual reality, artificially as an android, or cybernetically, in sync with a biological body through a transceiver), but it would not seem to require molecular nanotechnology.

If minds and environments can be simulated, the Simulation Hypothesis posits that the reality we see may in fact be a computer simulation, and that this is actually the most likely possibility.[12]

Uploading is a common theme in science fiction. Some of the earlier instances of this theme were in the Roger Zelazny 1968 novel Lord of Light and in Frederik Pohl's 1955 short story "Tunnel Under the World." A near miss was Neil R. Jones' 1931 short story "The Jameson Satellite", wherein a person's organic brain was installed in a machine, and Olaf Stapledon's "Last and First Men" (1930) had organic human-like brains grown into an immobile machine.

Another of the "firsts" is the novel Detta är verkligheten (This Is Reality, 1968) by the renowned philosopher and logician Bertil Mårtensson, in which he describes people living in an uploaded state as a means to control overpopulation. The uploaded people believe that they are "alive", but in reality they are playing elaborate and advanced fantasy games. In a twist at the end, the author turns everything into one of the best "multiverse" ideas of science fiction. Together with the 1969 book Ubik by Philip K. Dick, it takes the subject to its furthest point of all the early novels in the field.

Frederik Pohl's Gateway series (also known as the Heechee Saga) deals with a human being, Robinette Broadhead, who "dies" and, due to the efforts of his wife, a computer scientist, as well as the computer program Sigfrid von Shrink, is uploaded into the "64 Gigabit space" (now archaic, but Fred Pohl wrote Gateway in 1976). The Heechee Saga deals with the physical, social, sexual, recreational, and scientific nature of cyberspace before William Gibson's award-winning Neuromancer, and the interactions between cyberspace and "meatspace" commonly depicted in cyberpunk fiction. In Neuromancer, a hacking tool used by the main character is an artificial infomorph of a notorious cyber-criminal, Dixie Flatline. The infomorph only assists in exchange for the promise that he be deleted after the mission is complete.

In the 1982 novel Software, part of the Ware Tetralogy by Rudy Rucker, one of the main characters, Cobb Anderson, has his mind uploaded and his body replaced with an extremely human-like android body. The robots who persuade Anderson into doing this sell the process to him as a way to become immortal.

In the 1997 novel Shade's Children by Garth Nix, one of the main characters, Shade (a.k.a. Robert Ingman), is an uploaded consciousness that guides the other characters through the post-apocalyptic world in which they live.

The fiction of Greg Egan has explored many of the philosophical, ethical, legal, and identity aspects of mind uploading, as well as the financial and computing aspects (i.e., hardware, software, processing power) of maintaining "copies". In Egan's Permutation City and Diaspora, "copies" are made by computer simulation of scanned brain physiology. Also, in Egan's "Jewelhead" stories, the mind is transferred from the organic brain to a small, immortal backup computer at the base of the skull, with the organic brain then being surgically removed.

The Takeshi Kovacs novels by Richard Morgan are set in a universe where mind transfers are a part of standard life. With the use of cortical stacks, which record a person's memories and personality into a device implanted in the spinal vertebrae, it is possible to copy the individual's mind to a storage system at the time of death. The stack can be uploaded to a virtual reality environment for interrogation, entertainment, or to pass the time during long-distance travel. The stack can also be implanted into a new body, or "sleeve", which may or may not have biomechanical, genetic, or chemical "upgrades", since the sleeve can be grown or manufactured. Interstellar travel is most often accomplished by sending digitized human freight ("dhf") over faster-than-light needlecast transmission.

In the "Requiem for Homo Sapiens" series of novels by David Zindell (Neverness, The Broken God, The Wild, and War in Heaven), the verb "cark" is used for uploading one's mind (and also for changing one's DNA). Carking is done for soul-preservation purposes by the members of the Architects church, and also for more sinister (or simply unknowable) purposes by the various "gods" that populate the galaxy such gods being human minds that have now grown into planet- or nebula-sized synthetic brains. The climax of the series centers around the struggle to prevent one character from creating a Universal Computer (under his control) that will incorporate all human minds (and indeed, the entire structure of the universe).

In the popular computer game Total Annihilation, the 4,000-year war that eventually culminates in the destruction of the Milky Way galaxy is started over the issue of mind transfer, with one group (the Arm) resisting another (the Core) that attempts to enforce the total conversion of humanity into machines, framed as a "public health measure" on the grounds that machines are durable and modular.

In the popular science fiction show Stargate SG-1, the alien race who call themselves the Asgard rely solely on cloning and mind transfer to continue their existence. This was not a choice they made, but a result of the decay of the Asgard genome due to excessive cloning, which also caused the Asgard to lose their ability to reproduce. In the episode "Tin Man", SG-1 encounter Harlan, the last of a race that transferred their minds into robots in order to survive. SG-1 then discover that their own minds have also been transferred into robot bodies. Eventually they learn that their minds were copied rather than uploaded and that the "original" SG-1 are still alive.

The Thirteenth Floor is a film made in 1999 directed by Josef Rusnak. In the film, a scientific team discovers a technology to create a fully functioning virtual world which they could experience by taking control of the bodies of simulated characters in the world, all of whom were self-aware. One plot twist was that if the virtual body a person had taken control of was killed in the simulation while they were controlling it, then the mind of the simulated character the body originally belonged to would take over the body of that person in the "real world".

The Matrix, released the same year as The Thirteenth Floor, builds on the same kind of solipsistic philosophy. In The Matrix, the protagonist Neo finds out that the world he has been living in is nothing but a simulated dreamworld. However, this is better described as virtual reality than as mind uploading, since Neo's physical brain is still required to host his mind; the mind (the information content of the brain) is not copied into an emulated brain in a computer. Neo's physical brain is connected to the Matrix via a brain-machine interface, and only the rest of his physical body is simulated. Neo is disconnected from this dreamworld by human rebels fighting against AI-driven machines in what seems to be a never-ending war. During the course of the movie, Neo and his friends connect back into the Matrix dreamworld in order to fight the machine race.

In the series Battlestar Galactica, the antagonists of the story are the Cylons, sentient machines created by humanity that have developed to become nearly identical to human beings. When they die, they rely on mind transfer to keep on living, so that "death becomes a learning experience".

The 1995 movie Strange Days explores the idea of a technology capable of recording a conscious experience. In this case, however, the mind itself is not uploaded into the device. The recorded event, whose time frame is limited to that of the recording session, is frozen in time on a data disc, much like today's audio and video. Wearing the "helmet" in playback mode, another person can experience the brain's interpretation of external stimuli, along with the memories, feelings, thoughts, and actions that the original person recorded from his or her life. During playback, the observer temporarily sets aside his or her own memories and state of consciousness (the real self). In other words, one can "live" a moment in the life of another person, and one can "live" the same moment of one's own life more than once. In the movie, a direct link to a remote helmet can also be established, allowing another person to experience a live event.

Followers of the Raëlian religion advocate mind uploading in the process of human cloning to achieve eternal life. Living inside a computer is also seen by followers as an eminent possibility.[13]

However, mind uploading is also advocated by a number of secular researchers in neuroscience and artificial intelligence, such as Marvin Minsky. In 1993, Joe Strout created a small website called the Mind Uploading Home Page and began advocating the idea in cryonics circles and elsewhere on the net. That site has not been actively updated in recent years, but it has spawned other sites, including MindUploading.org, run by Randal A. Koene, Ph.D., who also moderates a mailing list on the topic. These advocates see mind uploading as a medical procedure which could eventually save countless lives.

Many Transhumanists look forward to the development and deployment of mind uploading technology, and many predict that it will become possible within the 21st century due to technological trends such as Moore's Law. It is often viewed as the end phase of the Transhumanist project, which might be said to begin with the genetic engineering of biological humans, continue with the cybernetic enhancement of genetically engineered humans, and finally culminate in the replacement of all remaining biological aspects.

The book Beyond Humanity: CyberEvolution and Future Minds by Gregory S. Paul and Earl D. Cox is about the eventual (and, to the authors, almost inevitable) evolution of computers into sentient beings, but it also deals with human mind transfer.

Raymond Kurzweil, a prominent advocate of transhumanism and the likelihood of a technological singularity, has suggested that the easiest path to human-level artificial intelligence may lie in "reverse-engineering the human brain", which he usually uses to refer to the creation of a new intelligence based on the general "principles of operation" of the brain, but he also sometimes uses the term to refer to the notion of uploading individual human minds based on highly detailed scans and simulations. This idea is discussed on pp. 198-203 of his book The Singularity is Near, for example.

Hans Moravec describes and advocates mind uploading in both his 1988 book Mind Children: The Future of Robot and Human Intelligence and also his 2000 book Robot: Mere Machine to Transcendent Mind. Moravec is referred to by Marvin Minsky in Minsky's essay Will Robots Inherit the Earth?.[14]


View post:

Mind uploading | Transhumanism Wiki | FANDOM powered by Wikia

Michael Graziano on The Evolution of Consciousness and …

Biography: Michael Graziano (Wikipedia) is a scientist, novelist, and composer, and is currently a professor of Psychology and Neuroscience at Princeton University. His previous work focused on how the cortex monitors the space around the body and controls movement within that space, including groundbreaking research into the brain's homunculus. His current research focuses on the biological basis of attention and consciousness. He has proposed the Attention Schema theory, an explanation of how, and for what adaptive advantage, brains attribute the property of awareness to themselves. His 2013 book, Consciousness and the Social Brain, explores this theory in depth and extends it in novel and surprising ways.

Andy McKenzie: Your recent book, Consciousness and the Social Brain, describes and expands upon your fascinating and well-received model of consciousness. Interestingly, consciousness itself is perhaps too narrow a description of the book's content, since you also describe attention, and specifically how consciousness arises as a useful adaptation for modeling one's own attention processes and the attention processes of others. One thing I'm particularly curious about is: if we were to wind back the evolutionary clock, is there any other way that consciousness could have evolved? For example, if it had evolved in a highly cooperative species, as opposed to one in which social games play such a prominent role, would the consciousness that developed be recognizable as such?

Michael Graziano: The evolutionary question is a good one. We suspect that awareness, in some form, is very evolutionarily old, and has its roots as far back as half a billion years ago. Different species may have different bells and whistles, different quirks or flavors, but almost every animal has either something like awareness or some very simple precursor algorithm from which our awareness emerged.

As you hinted in your question, the story starts with attention, this mechanistic ability to focus resources on a limited set of signals and process them in depth. Attention may have evolved very early, probably about half a billion years ago, as soon as animals had sophisticated nervous systems. That means insects, fish, mammals, birds, even octopuses have some version of attention. And we think that as soon as attention appeared, evolution would have begun to construct an attention schema. The brain not only performs attention, but also builds an internal description of what it's doing. This follows from everything we know about control engineering: if you want to control something, you need an internal description of it. This internal description of attention would have come in very early in evolution and then gradually become more elaborate. It's this internal description of attention, this attention schema, distorted and blurry, that tells us we have a non-physical essence inside us that allows us to mentally possess items and that empowers us to act on those items. Awareness is the internal model of attention.
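
The control-engineering point, that a system which controls its own attention benefits from a simplified internal model of it, can be sketched with a toy agent whose self-report comes from a coarse schema rather than from the underlying attention weights. This is only an illustration of the general idea, not an implementation of Attention Schema theory.

```python
# A minimal sketch of the control-engineering point made above: a system that
# allocates attention also keeps a simplified internal model (a "schema") of
# that allocation, and it is the schema, not the raw mechanism, that it reports
# on. This toy agent is an illustration, not the Attention Schema theory itself.
import numpy as np


class Agent:
    def __init__(self, channels):
        self.channels = channels
        self.attention = np.ones(len(channels)) / len(channels)  # the real mechanism
        self.schema = {"focus": None, "strength": "low"}          # the coarse self-model

    def attend(self, signals):
        # Mechanistic attention: softmax over signal strength.
        s = np.asarray(signals, dtype=float)
        self.attention = np.exp(s) / np.exp(s).sum()
        # The schema keeps only a blurry summary of what just happened.
        top = int(np.argmax(self.attention))
        self.schema = {
            "focus": self.channels[top],
            "strength": "high" if self.attention[top] > 0.5 else "diffuse",
        }

    def report(self):
        # What the agent can say about itself comes from the schema,
        # not from direct access to the attention weights.
        return f"I am aware of the {self.schema['focus']} ({self.schema['strength']})"


agent = Agent(["sound", "light", "touch"])
agent.attend([0.2, 2.5, 0.4])
print(agent.report())
```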

So it's not that some animals are conscious and others are not. It's much more of a graded thing. As humans, of course, we have our own peculiar human form of consciousness. We use it not only to understand ourselves, but also to understand others. One of the main human uses of consciousness is to attribute it to others; it's foundational to our social intelligence.

I do think that if we had a different set of species properties, we would have a different flavor of consciousness. Just like different animals have different kinds of legs, adapted to their own needs, but we can recognize them all as legs.

In fact, given the complexity of wiring up a brain during infancy and childhood, I suspect that different people have slightly different consciousness constructs. What it means to be conscious is probably slightly different for different people. That's a wild thought.

Andy McKenzie: In your Aeon article from a year and a half ago, you wrote:

> I find myself asking, given what we know about the brain, whether we really could upload someones mind to a computer. And my best guess is: yes, almost certainly.

You then go on to discuss some of the interesting and at times troubling social ramifications that this would entail. Do you still consider the prospect of mind uploading to be more likely than not to be technically feasible? And either way, what do you think is the strongest argument against the relatively near-term (say, within 100-200 years) feasibility of mind uploading?

Michael Graziano: Yes, I think mind uploading is possible and even inevitable. The technology is moving that way, and there is way too much social motivation to stop that momentum. Just like Khufu wanted to imprint his memory on the world by building the largest of the great pyramids, and just as some people now put every detail of their lives online so that their online presence lingers like a ghost after they are dead, there will be a huge market for preserving so much of yourself that the trace left over actually thinks and feels and talks like you do, and has your memories, and believes it IS you. As strange and discombobulating as that seems, it is ultimately technically possible. I think it will be a gradual development. These preserved minds will be crude at first, not really fully naturalistic, more like caricatures of people. Within fifty years, I'd say it will be technically possible to do a first crude pass at it, and someone will try it on a mouse or a frog or something. It's a matter of gradual refinement after that, until the caricature becomes a duplicate. It all depends on the progress in scanning technology. If we develop a non-invasive scan, like an MRI, that can get down to the microscopic details of individual neurons and their synaptic connections, then we're set.

One of the strangest quirks of the mind-uploading mythos is the notion that if you upload yourself into a computer, your real self in the real world disappears, and you have to get yourself back out of the computer to return to the real world. This wonderful bit of fantasy is total nonsense and was invented to solve a narrative problem in storytelling. If you copied your mind and uploaded it onto a computer, there'd be two of you, one in the real world and one in the computer world, living through separate experiences. And the one in the computer world could in principle be copied any number of times, until there are millions of you. And some of those versions of you could be directly linked to other uploaded minds, with direct access to each other's thoughts. This is very hard for people to wrap their minds around. It challenges our understanding of individuality. This is the main philosophical challenge of our future, it seems to me: the breakdown of the concept of individuality.

Andy McKenzie: You mentioned a non-invasive scan of the microscopic details of individual neurons and their synaptic connections as a step towards mind uploading. Obviously this is somewhat speculative at this point, but I'm curious: what level of scanning resolution do you think would be required to produce an uploaded mind that would identify as being the same as the original mind?

Michael Graziano: To produce the first crude approximation to an uploaded mind, we'd need a scan at a resolution that captures the very thin processes, or wires, sprouting from neurons, and the synapses between neurons. That would be at the sub-micron level, maybe 100 nanometers. That's very small. Current MRI technology, at the highest resolution typically used on the brain, can resolve physical details of about half a millimeter at best. There are scanning techniques that can do much better, but right now they are limited in various ways, for example to scanning a small piece of tissue. So a lot of development is needed. On the other hand, that development is going on rather aggressively, and there is no reason to think there is any fundamental technical limit in sight.
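
For a sense of scale, a back-of-the-envelope calculation contrasts the two resolutions mentioned here; the brain volume (roughly 1.2 litres) and the one-byte-per-voxel storage figure are assumptions added purely for illustration.

```python
# Back-of-the-envelope arithmetic on the resolutions mentioned above. The brain
# volume (~1.2 litres) and the one-byte-per-voxel assumption are illustrative;
# the 100 nm and 0.5 mm figures come from the interview.
brain_volume_m3 = 1.2e-3  # roughly 1.2 litres


def voxel_count(resolution_m):
    return brain_volume_m3 / resolution_m ** 3


fine = voxel_count(100e-9)    # the sub-micron scan described above
coarse = voxel_count(0.5e-3)  # today's roughly half-millimetre MRI voxels
print(f"100 nm scan: ~{fine:.1e} voxels (~{fine / 1e18:.1f} exabytes at 1 byte/voxel)")
print(f"0.5 mm MRI:  ~{coarse:.1e} voxels")
print(f"resolution gap: ~{fine / coarse:.1e}x more voxels")
```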

Nobody knows how refined a scan would need to be to duplicate all the nuances. It could be that a much more refined method, down to the molecular level, is needed. Nobody will know until people start to try these things out.

Andy McKenzie: What are you working on now?

Michael Graziano: My lab continues to study how consciousness is implemented in the brain. We do experiments on people, for example in the MRI scanner, to test and refine the Attention Schema theory of the biological basis of awareness.

Andy McKenzie: Thanks, Professor Graziano!

View original post here:

Michael Graziano on The Evolution of Consciousness and ...