10 Best Hollywood Spy Thriller movies available in Hindi – JanBharat Times

If you have an interest in the world's intelligence agencies, you must have heard the name Eli Cohen. He was an agent of Mossad (Israel's secret service) who, through sheer intellect and brainpower, nearly rose to a top post in the government of neighboring Syria.

He would send intelligence back to Israel by letter and radio. One of his most famous achievements was gathering intelligence on Syrian fortifications, which many believe helped Israel win the famous Six-Day War. A web series based on the life of Eli Cohen is available on the giant streaming platform Netflix.

Here we round up some of the best Hollywood espionage movies based on real-life spies like Cohen. Although many amazing spy thriller movies and shows have been made over the years, such as The Americans, Operation Finale, and Fauda, sadly they are not available in dubbed versions. We are going to cover the best spy movies that are available dubbed.


Steven Spielberg's Munich, an amazing action spy thriller, is based on real-life events. The film tells the story of the vengeance for the Munich massacre, in which Israel's Olympic athletes were killed by terrorists during the 1972 Summer Olympics.

It narrates the story of Operation Wrath of God and how Mossad agents succeeded in killing those involved in the attack. It has an IMDb rating of 7.5 out of 10 and stars Eric Bana and James Bond fame Daniel Craig in pivotal roles. It is available on the streaming services Netflix and Amazon Prime Video.

Snowden is a biographical spy thriller based on the life of Edward Snowden, an intelligence contractor who worked for the CIA (the American spy agency) and later, in 2013, leaked highly classified NSA (National Security Agency) files and smuggled himself out of the country. The film has an IMDb rating of 7.3 out of 10 and stars Joseph Gordon-Levitt, Shailene Woodley, and Melissa Leo in pivotal roles. Available on Netflix.

Ben Affleck's directorial effort Argo revolves around the rescue of Americans held hostage in Tehran, Iran, during the Iran hostage crisis (1979-81). It stars Affleck, Bryan Cranston, Alan Arkin, John Goodman, and Tate Donovan in significant roles.

It was made on a budget of 4.5 crore U.S. dollars and grossed a whopping 23 crore U.S. dollars at the box office worldwide. This must-watch film is critically acclaimed and has an IMDb rating of 7.7/10. Available on the giant streaming platform Amazon Prime Video.

The historical spy drama Bridge of Spies is available on the SonyLIV app and has an IMDb rating of 7.6 out of 10. It revolves around the Cold War between the U.S. and the Soviet Union.

Released in October 2015, it did an overall business of 16.5 crore U.S. dollars at the box office worldwide. Another masterpiece from director Steven Spielberg, it stars the veteran actor Tom Hanks in the pivotal role; other cast members include Mark Rylance, Scott Shepherd, Amy Ryan, and Sebastian Koch.

The Bruce Willis starrer action comedy Red revolves around a former black-ops agent who gears up to capture an assassin who has vowed to kill him. It has an IMDb rating of 7.7 out of 10 and is available on the leading streaming platform Amazon Prime Video. Helmed by Robert Schwentke, with cinematography by Florian Ballhaus and editing by Thom Noble.

Charlie Wilson's War is based on the CIA's Operation Cyclone, which armed and supported the Afghan mujahideen during the Soviet-Afghan War (1979-89). Directed by Mike Nichols, it has an IMDb rating of 7 out of 10 and is available on the leading streaming platform Netflix and on YouTube.

It has an ensemble cast: veteran actor Tom Hanks, famous Hollywood actress Julia Roberts, Amy Adams, Emily Blunt, and veteran Bollywood actor Om Puri.

Jack Ryan: Shadow Recruit is an American action spy thriller directed by Kenneth Branagh, based on the character created by novelist Tom Clancy. It stars Chris Pine, Keira Knightley, Kevin Costner, and Branagh himself in pivotal roles. You will find it on the streaming platform Netflix. The IMDb rating is 6.2 out of 10. Haris Zambarloukos and Martin Walsh served as cinematographer and editor, respectively.

This George Clooney and Brad Pitt starrer, Burn After Reading, is a critically acclaimed film. Released in October 2008, it was made on a budget of 3.7 crore U.S. dollars and grossed an overall 16.3 crore U.S. dollars at the box office worldwide. Joel and Ethan Coen wrote, directed, edited, and produced the film. It is a black comedy spy crime film with an IMDb rating of 7 out of 10 and is available on Amazon Prime Video.

Rogue Nation is the fifth installment in the hit Mission: Impossible franchise. The IMF (Impossible Mission Force) agent Ethan Hunt is hiding from the CIA, the American intelligence agency, on account of the IMF's dissolution, and must now prove the existence of a mysterious group, the Syndicate, made up of former espionage officers from several countries. It is available on Netflix and has an IMDb rating of 7.4/10.

In the No. 10 spot we have the Oscar-winning film The Hunt for Red October. It revolves around a CIA analyst who has received intelligence on a Soviet naval captain trying to defect to the U.S.

Now the analyst must prove his theory to the U.S. Navy to stop a violent confrontation between the Soviet and U.S. navies. The film is set during the Cold War and has an IMDb rating of 7.6/10. Available on Netflix.


Cyber Challenges for the New National Defense Strategy – War on the Rocks

A major moment for America's approach to cyberspace might be just around the corner. It's hard to make a new national defense strategy an exciting watershed, especially when a curious and ill-defined term, "integrated deterrence," is at the center of it. But skeptics should be a little more open to the idea that the Pentagon is on the verge of pushing out a key idea that could solve many of its struggles in cyberspace. According to defense officials, integrated deterrence includes incorporating military capabilities across domains, theaters, and phases of conflict; rebuilding alliances; and fostering innovation and technological development, all with an eye toward creating a more resilient military. This list sounds good in theory. But, gauging from some expert reactions so far, it's not clear what successful integration (or deterrence) would look like in practice.

Recently, Assistant Secretary of Defense Mara Karlin emphasized that the Pentagon is "stress-testing ideas" so that "everybody knows what we're talking about." In the spirit of this stress test, and since the Defense Department has a well-known track record with vague deterrence strategies and neologisms that seem designed to justify defense budgets, below we conduct our own stress test for cyber and the new strategy.

What does integration look like for cyberspace? What will the strategy have to overcome in order to be successful? Is deterrence the right frame for strategic success, or should the new strategy focus more squarely on resilience? The answers to these questions can help guide the Department of Defense as it makes the final tweaks to its new strategy and, hopefully, make the United States more successful not just in cyberspace but across domains.

How Would Integrated Deterrence Actually Integrate Cyber?

Cyberspace is an important component of the Defense Department's integrated deterrence efforts. As Secretary of Defense Lloyd Austin noted in his remarks, this new strategic approach involves "integrating our efforts across domains and across the spectrum of conflict," as well as the elimination of stovepipes between services and their capabilities, and coordinated operations on land, in the air, on the sea, in space, and in cyberspace.

This idea makes inherent sense. It is also consistent with research that has found cyber operations have limited utility as independent instruments of coercion, are rarely decisive in conflicts, and are generally poor signals of resolve for deterrence. Instead, cyber operations are more effective when they augment other military and foreign policy tools: through deception and espionage, by manipulating the information environment and decision-making, and potentially by shaping or complementing conventional operations on the battlefield.

So, integrating cyber operations across theaters, domains, and phases of conflict is a good thing. Why does the Department of Defense need a new concept to do this? Cyber operations have been difficult to incorporate into the normal defense planning process. This process, a highly formulaic procedure (usually focused on a single theater) of allotting troops and weapons by phases of conflict, is unwieldy for cyberspace operations. This is because cyber operations struggle with assured access, with good estimates of effectiveness or extent of damage, and even with certainty about how long they will work (or whether they will work as intended). Though using a cyber operation, for instance, to blind air defenses before an airstrike sounds good on paper, in practice mission commanders would rather rely on cruise missiles or electronic jamming, which can meet time-on-target needs and have better estimates of effectiveness than cyber operations. Further, cyber accesses for conventional conflicts (for instance, access to an adversary's weapons networks or military command systems) are difficult to obtain and retain, meaning that cyber capabilities rarely sit on a shelf for an extended period, available to use at a whim when an operational plan is executed. That said, substituting cyber for conventional capabilities comes with some unique benefits, such as the temporary and reversible nature of the damage inflicted and the ability to operate in a more deniable fashion. Discerning how to capitalize on these aspects of cyber capabilities while addressing their limitations represents a central challenge for planners.

For years, the solution was to invest in systems (like Cyber Command's Unified Platform) that were supposed to provide greater certainty about cyber effects. However, these efforts have struggled to create certainty in a domain where uncertainty is a fixture, not a temporary defect. Perhaps, therefore, a better approach is to assimilate other domains and capabilities into processes in which cyber operations have been innovative and successful. In particular, the event-based task forces increasingly used for cyber events (such as Joint Task Force-Ares or the interagency task force to combat election interference) provide an alternative planning mechanism that is dynamic, works across government agencies, and fits nicely within the infamous "phase 0" of competition, where most gray-zone operations take place (and where the joint planning process is notoriously unsatisfying).

Commanders also need to think about cyber effects in conflict as more than just replacements for things they could otherwise do with conventional capabilities. Cyber operations are at their best not when they are designed to create an effect at a moment in time, but when they are part of a larger strategy of obfuscation, deception, and sabotage. These can be extremely useful complements to conventional missions, but how they are targeted, tasked, and executed will likely not fit best within the tasking-order cycle, or even in service silos that disproportionately focus on single platforms rather than network effects.

Finally, planning and process integration will ultimately fail if the Defense Department does not make good on innovation. Currently, the program-of-record and acquisition process makes acquiring cyber capabilities (especially on the defensive side, where commercial software solutions far outpace the Department of Defense) extremely difficult. Software, unlike most defense acquisition widgets, requires constant development, patching, and updating, all tasks the current acquisition process is not designed to accommodate. Even worse is the Pentagon's record of investing in software through research or small businesses and getting it across the "valley of death" and implemented on its own networks. Further, the lack of information technology integration between the armed services means that networks, software, and even data are owned, and more often than not administered, separately by each service. This is a nightmare for acquiring cyber capabilities, whether defensive or offensive, and large enterprise-wide solutions (even from Cyber Command) are almost impossible to implement without an advocate from one of the armed services spearheading the effort.

Challenges (and Opportunities) of Alliances

Integrated deterrence goes beyond what is already a very difficult challenge of making cyberspace work better within the U.S. military. Alliances also seem to play a huge role in the Department of Defense's new deterrence concept. As Undersecretary of Defense Colin Kahl explained, the new strategy requires that the Department of Defense be integrated "across our allies and partners, which are the real asymmetric advantage that the United States has over any other competitor or potential adversary."

Cyberspace presents a unique challenge for alliances. For years, Washington's traditional alliance relationships struggled even to agree on basic cyber terms, and attempts to share information were complicated by cyber operations' close relationship with the highly classified world of signals intelligence. Moreover, U.S. actions in cyberspace have, in some cases, strained alliance relationships. Two prominent examples are the backlash over the Edward Snowden leaks and concerns about the implications of persistent engagement and defend forward for allied-owned networks.

These were considerable challenges. However, as cyber incidents have escalated over the last few years, there has also been an increasing recognition across these relationships that cyberspace matters. This joint recognition spurred new information-sharing mechanisms and partner efforts to find and root out adversary infiltration attempts on allied networks. Most recently, joint attribution by NATO and E.U. partners called out China for the Microsoft Exchange hack, a rare reaction from these organizations. This comes on the heels of public statements at the NATO summit in Brussels in June that reaffirmed the applicability of the alliance's mutual defense clause to cyberspace. Further, despite the aforementioned alliance tensions, the Defense Department has conducted 24 "hunt forward" operations in which U.S. cyber protection teams partnered with 14 countries to root out adversary activity on allied networks.

Building on this forward momentum, perhaps the greatest opportunity for the Biden administration's national defense strategy is to use military alliances and partnerships to facilitate norm development. Norms are shared understandings about appropriate behavior. Some norms are written down and formalized in agreements, while others are more informal and emerge from state practice over time. Moreover, norms are agnostic with respect to morality: there can be good norms that facilitate cooperation, but also bad norms that make the international system less stable.

In the past, particularly under the Obama administration, norms were considered the realm of the State Department, while the Department of Defense focused on deterrence by punishment and denial. This changed under the Trump administration, when the State Department's norms efforts took a back seat to Department of Defense efforts to defend forward. The initial foundational work done by the Obama administration on cyber norms, paired with four years of experimentation and more risk-acceptant cyber authorities under the Trump administration, has created a track record for cyber norms that is far more heterogeneous than policymakers have let on. While there are certainly many areas where states disagree, norms do exist in cyberspace.

For instance, a diverse set of states, beyond just the United States and like-minded nations, has come to formal agreements about rules of the road for cyberspace through various international institution-driven processes, most notably the United Nations Group of Governmental Experts and the Open-Ended Working Group. To the surprise of many observers, earlier this year both of these processes resulted in consensus reports in which parties agreed to a set of cyber norms. From a bilateral perspective, rivals such as Russia have been willing to engage the United States in discussions about cyber norms, even if the prospects for cooperation remain uncertain. And beyond formalized agreements, there is a range of unwritten, implied norms that shape mutual expectations of behavior in cyberspace. These include a firebreak between cyber and conventional operations, such that states do not respond to cyber attacks with the use of kinetic military force; the idea that cyber espionage is generally treated like other forms of espionage (with some exceptions); and a pattern of tit-for-tat responses in cyberspace that has led to a nascent sense of what counts as proportional.

The Defense Department plays a large role in this process, though in the past this hasn't been a formal effort. Specifically, how the Department of Defense uses its own cyber capabilities, or threatens to respond to those of others, can play an outsized role in whether cyberspace norms proliferate. Some have argued that employing military cyber power can, through a tacit process, contribute to the development of cyber norms. However, the ambiguous signaling strategies that this line of argument generates are often overly complicated and obtuse. Strategic documents are some of the clearest articulations of norms that adversaries receive. Given that, the U.S. military should use the opportunity of a new national defense strategy to voice clearly what the United States believes are appropriate norms of behavior in cyberspace. In particular, it should consider making unambiguous statements about what the Pentagon won't do in cyberspace: in effect, a declaratory policy of restraint. This may be as important to norm propagation as efforts by the State Department to codify international agreements.

Are the Assumptions Correct?

We have previewed what integrated deterrence might look like in practice and how difficult it can be to actually integrate. Knowing whether deterrence can work is even more difficult. For cyber, we are concerned that previews of the strategy rest on shaky assumptions. In particular, Austin's remarks about the strategic environment in cyberspace suggest some faulty assumptions about escalation and deterrence. Austin described cyberspace as a domain in which "norms of behavior aren't well established" and "the risks of escalation and miscalculation are high." Implied in this statement is a link between the former and the latter; in other words, one of the reasons cyberspace may be a dangerous domain is the purported absence of meaningful norms of behavior. However, this is problematic for two reasons.

First, as we alluded to above, cyberspace is not an ungoverned Wild West bereft of norms. When U.S. policymakers lament the absence of norms in cyberspace (or in other domains), they almost always mean the lack of norms that the United States perceives to be in its own interests or consistent with its values, but this does not mean that norms do not exist.

Second, despite fears among scholars and practitioners, there is little empirical support for the notion that cyberspace is a uniquely escalatory domain (or that cyber operations are effective signals for cross-domain deterrence). Academics have systematically explored this question through deductive analysis, wargames, and statistical analysis, and they rarely find evidence of escalation from cyberspace to violence. The reality is that escalation in cyberspace is neither rampant nor wholly impossible; that's because escalation is an inherently political phenomenon driven by the perceptions and risk calculations of adversarial actors. Therefore, sweeping pronouncements about cyber escalation do little to aid policymakers in developing reasonable assessments of escalation risks (and may actually handcuff otherwise useful options below the threshold of violence).

Assumptions matter because they guide strategy development and implementation, even if not explicitly. Therefore, reexamining long-held but erroneous understandings of the nature of strategic competition in cyberspace can provide a stronger basis for discerning how to incorporate cyber operations into defense strategy. Specifically, policymakers should set aside truisms about cyber escalation and instead focus on more granular discussions about a set of plausible scenarios that could give rise to different forms of escalation risks, and the mitigation strategies that follow from them.

Looking Ahead: Resilience!

Finally, Austin's speech hints at what we see as a compelling opportunity: to reimagine cyber strategy in a resilience context, potentially making progress amid seemingly intractable debates among policymakers about the feasibility of cyber deterrence. The main difference between strategies of resilience and strategies that focus on deterrence or even defense is that resilience is about perseverance over time while responding to disruptive attacks. Whereas deterrence fails when states attack, resilience assumes that states will attack and instead predicates success on the ability to absorb those attacks, recoup, retrench, and conduct sustained campaigns. One of the limitations of previous cyber strategy has been the caging of ideas like "persistent engagement" in offensive or defensive language. Instead, the value of persistence is in resilience and survival.

What might a resilient cyber strategy look like? While a comprehensive take is beyond the scope of this article (indeed, it represents a significant research agenda in its own right), we offer a few initial suggestions for policymakers to consider. First, it would require the joint force to identify the critical functions and processes that are essential for core missions. Second, it would disincentivize (and even punish) the services for creating highly centralized or exquisite and fragile networks and platforms, recognizing that cyber security is less likely to succeed when these types of capabilities are built. Third, it would require the services to build manual workarounds and backup solutions to limit adversary impact on critical systems and functions and to prioritize recovery efforts. Finally, a cyber strategy based on resiliency would measure success not by how many attacks occur but by the effects of cyber attacks on America's ability to conduct operations across domains and achieve key military objectives. Together, these initiatives toward resilience would both require and create a more integrated force.

Erica Lonergan, Ph.D., is an assistant professor in the Army Cyber Institute and a research scholar in the Saltzman Institute of War and Peace Studies at Columbia University. The views expressed in this article are personal and do not reflect the policy or position of any U.S. government organization or entity. Follow her on Twitter @eborghard.

Jacquelyn Schneider, Ph.D., is a Hoover Fellow at Stanford University and an affiliate at Stanfords Center for International Security and Cooperation. Follow her on Twitter @jackiegschneid.

Image: U.S. Space Force (Photo by Senior Airman Andrew Garavito)


Cybersecurity: Increase your protection by using the open-source tool YARA – TechRepublic

YARA won't replace antivirus software, but it can help you detect problems much more efficiently and allows more customization. Here's how to install YARA on Mac, Windows and Linux.

Image: djedzura/ iStock

A plethora of different tools exist to detect threats to the corporate network. Some of these detections are based on network signatures, while some others are based on files or behavior on the endpoints or on the servers of the company. Most of these solutions use existing rules to detect danger, which hopefully are updated often. But what happens when the security staff wants to add custom rules for detection or do their own incident response on endpoints using specific rules? This is where YARA comes into play.

YARA is a free and open-source tool aimed at helping security staff detect and classify malware, but it should not be limited to this single purpose. YARA rules can also help detect specific files or whatever content you might want to detect.


YARA comes as a binary that can be launched against files, taking YARA rules as arguments. It works on Windows, Linux and Mac operating systems. It can also be used from Python scripts via the yara-python extension.

YARA rules are text files that contain items and conditions that trigger a detection when met. These rules can be launched against a single file, a folder containing several files or even a full file system.

Here are a few ways you can use YARA.

The main use of YARA, and the one it was initially created for in 2008, is malware detection. You need to understand that it does not work like traditional antivirus software. While the latter mostly detects static signatures of a few bytes in binary files, or suspicious file behavior, YARA can broaden detection by matching specific combinations of components. It is therefore possible to create YARA rules that detect whole families of malware, not just a single variant. The ability to use logical conditions to match a rule makes it a very flexible tool for detecting malicious files.
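To give a flavor of what such logical conditions look like, here is a hypothetical rule sketch; the rule name, the strings, and the byte pattern are all invented for illustration, combining a file-header check with strings so that any one indicator triggers a match:

```yara
rule Example_Family
{
    meta:
        description = "Hypothetical rule for an invented malware family"
    strings:
        $c2   = "evil.example.com" ascii wide    // a command-and-control domain
        $code = { 6A 40 68 00 30 00 00 }         // a distinctive byte sequence
    condition:
        // Windows PE files begin with the "MZ" magic bytes (0x5A4D little-endian)
        uint16(0) == 0x5A4D and any of them
}
```

A rule file like this could then be run recursively against a directory with the command-line binary, e.g. `yara -r family.yar /path/to/scan`.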

Also, it should be noted that in this context it is also possible to use YARA rules not only on files but also on memory dumps.

During incidents, security and threat analysts sometimes need to quickly determine whether a particular file or piece of content is hidden somewhere on an endpoint, or even anywhere on the corporate network. One solution for detecting a file no matter where it is located is to build and use specific YARA rules.

YARA rules can enable real file triage when needed, and classification of malware by family can be optimized using them. Rules need to be very precise, however, to avoid false positives.

It is possible to use YARA in a network context to detect malicious content sent to the corporate network it protects. YARA rules can be run on e-mails, and especially on their attached files, or on other parts of the network, such as HTTP communications on a reverse proxy server. Of course, it can be used in addition to existing analysis software.


Outgoing communications can be analyzed using YARA rules to detect outgoing malware communications, but also to try to detect data exfiltration. Custom YARA rules built to recognize the company's legitimate documents might work as a data loss prevention system and detect a possible leak of internal data.

YARA is a mature product, and several different EDR (endpoint detection and response) solutions therefore allow personal YARA rules to be integrated into them, making it easier to run detections on all endpoints with a single click.

YARA is available for different operating systems: macOS, Windows, and Linux.

YARA can be installed on macOS using Homebrew. Simply type and execute the command:
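The command itself appears to have been dropped from the article; it is presumably the standard Homebrew package install:

```shell
# Install YARA via the Homebrew package manager
brew install yara
```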

After this operation, YARA is ready for use in the command line.

YARA offers Windows binaries for easy use. Once the zip file is downloaded from the website, it can be unzipped in any folder and contains two files: Yara64.exe and Yarac64.exe (or Yara32.exe and Yarac32.exe, if you chose the 32-bit version of the files).

It is then ready to work on the command line.

YARA can also be installed directly from its source code: download the source code (tar.gz) archive from the project's releases page, then extract the files and compile. As an example, we'll use version 4.1.3 of YARA, the latest version at the time of this writing, on an Ubuntu system.

Please note that a few packages are mandatory and should be installed prior to installing YARA:
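The package list appears to have been dropped from the article; on Ubuntu, the prerequisites named in the upstream YARA documentation are the usual autotools build chain:

```shell
# Build dependencies for compiling YARA from source
sudo apt-get install automake libtool make gcc pkg-config
```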

Once done, run the extraction of the files and the installation:
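The commands were omitted from the article; the build follows the standard autotools sequence. A sketch, assuming the 4.1.3 tarball downloaded above:

```shell
# Extract the source archive, then build and install
tar -zxf yara-4.1.3.tar.gz
cd yara-4.1.3
./bootstrap.sh        # generate the configure script
./configure
make
sudo make install
sudo ldconfig         # refresh the shared-library cache so yara finds libyara
```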

YARA is easy to install; the most difficult part is learning how to write efficient YARA rules, which I'll explain in my next article.

Disclosure: I work for Trend Micro, but the views expressed in this article are mine.



HashiCorp’s IPO will place it among the most richly valued open source tech companies – TechCrunch

The HashiCorp IPO intends to shoot the narrows between Thanksgiving and Christmas, with its first IPO pricing interval set to give it one of the richest valuations of any technology company with a strong open source component to its core business.


In a recent S-1/A filing, the cloud infra management company indicated that it expects to sell shares in its public offering at a range of $68 to $72 apiece. That interval could move, of course, before the company prices. Nubank, for example, reduced its IPO price range this week ahead of its anticipated debut.

At the upper end of HashiCorp's price range, using a fully diluted share count, the former startup will land among the most richly valued tech companies in the world that rely on open source code. The company's debut, then, will put points on the board for more than just itself when it does trade. (For more on the company's economics, head here.)

Let's talk about HashiCorp's IPO valuation range, as well as how it stacks up against other public tech companies with robust revenue multiples.

HashiCorp's IPO valuation at its current range can be calculated in one of two ways. The first employs a simple share count: the number of shares anticipated to be outstanding after its debut. The second uses a fully diluted share count, which also includes shares that have been earned through options but not yet converted from pledges into shares.

The company expects to have 178,895,570 shares of Class A and Class B stock in circulation after its IPO. HashiCorp's simple IPO share count rises to 181,190,570 if we count shares reserved for its underwriters.

Using the latter figure, at a $68 to $72 per-share IPO price interval, HashiCorp would be worth between $12.3 billion and $13.0 billion.
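The arithmetic behind that range is simple multiplication of the share count by each end of the price interval, which can be sanity-checked in a few lines:

```python
# Simple-share-count valuation at each end of the IPO price range
shares = 181_190_570          # simple share count, incl. underwriters' shares

low = shares * 68 / 1e9       # valuation at $68 per share, in billions of dollars
high = shares * 72 / 1e9      # valuation at $72 per share

print(f"${low:.1f}B to ${high:.1f}B")   # matches the $12.3B-$13.0B range above
```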

However, on a fully diluted basis, the company's value is much higher. Per Renaissance Capital, at $70 per share, HashiCorp's IPO, inclusive of a broader share count, would value it at $14.2 billion. Extrapolating to $72 per share, the company could be worth as much as $14.6 billion.

The unicorn was last valued at around $5 billion in March 2020, meaning its IPO pricing looks set to be a win.


Why your external monitor looks awful on Arm-based Macs, the open source fix and the guy who wrote it – The Register

Interview Folks who use Apple Silicon-powered Macs with some third-party monitors are disappointed with the results: text and icons can appear too tiny or blurry, or the available resolutions are lower than what the displays are capable of.

It took an open source programmer working in his spare time to come up with a workaround that doesn't involve purchasing a hardware dongle to fix what is a macOS limitation.

István Tóth lives in Hungary, and he calls his fix BetterDummy. It works by creating a virtual display in software and mirroring that virtual display to the real one, to coax macOS into playing ball. The latest version, 1.0.12, was released just a few days ago, and the code is free and MIT-licensed.

One issue arises when you plug certain sub-4K third-party monitors into your M1 Mac, including QHD monitors with a resolution of 2560x1440. The operating system either displays the desktop at the monitor's native resolution, in which case text and user-interface widgets appear too small, or offers an unusably blurry magnified version.

The blurring is because macOS isn't enabling its Retina-branded high-pixel-density mode called HiDPI, which would result in crisp font and user-interface rendering. For instance, if you have an M1 Mac connected to an external monitor with a native resolution of 2560x1440, and you try to run it at 1280x720 to make it easier to read, even though you satisfy the pixel density requirements of HiDPI, you still get a scaled blurry mess and not a crisp HiDPI view because macOS won't enable its Retina mode.
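The pixel math makes the frustration concrete: a 2x HiDPI mode renders into a backing store twice the logical resolution in each dimension, and for 1280x720 that backing store is exactly the panel's native 2560x1440. A minimal sketch of the feasibility check macOS could, in principle, perform (this is illustrative, not Apple's actual logic):

```python
def hidpi_fits(logical_w, logical_h, native_w, native_h, scale=2):
    """A 2x HiDPI mode renders at scale * logical resolution, so it
    fits whenever the backing store is no larger than the panel."""
    return logical_w * scale <= native_w and logical_h * scale <= native_h

# 1280x720 HiDPI on a 2560x1440 (QHD) panel: the backing store is
# exactly the native resolution, so the pixels are all there...
print(hidpi_fits(1280, 720, 2560, 1440))   # True
# ...yet M1 Macs refuse to offer the mode on sub-4K displays anyway
```

On a 1080p panel the same 1280x720 HiDPI mode genuinely would not fit (2560 > 1920), which is presumably why sub-4K HiDPI is a judgment call at all; the complaint is that QHD panels get lumped in with 1080p ones.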

On top of this, M1 Macs may offer resolutions lower than what an external third-party monitor is actually capable of, with no way for users to add more options or fine-tune them. For example, you might find that your 5120x2160 ultra-wide monitor is only offered a maximum of 3440x1440.

There are tonnes of complaints about this from users on support boards and forums; there is even a petition for people to sign to get Cupertino's attention. We asked Apple if it planned to address these shortcomings in macOS, and its spokespeople were not available for comment.

Tóth reckons the reason for much of this is that the Arm-based Macs use graphics driver code based on iOS and iPadOS, which do not need to support that many displays, and certainly not any they can't understand. Macs with x86 processors, meanwhile, can enable HiDPI on sub-4K displays as well as allow the user to configure the available resolutions.

Enter BetterDummy, an app that tricks macOS into thinking an actual 4K display is connected so that HiDPI rendering is enabled and works. It also allows people to create and tune their own resolutions if they're not available from the operating system.

Nobody can explain it better than the guy behind the code. So we decided to chat with him so he can tell us more about his project, where he thinks Apple could improve, and why Intel-based Macs are more flexible when it comes to supporting non-Apple monitors, among other things.

El Reg: So, what are the problems?

Tóth: Apple is probably one of the biggest innovators, always willing to push the envelope and design things better. And for these state-of-the-art products, customers are willing to pay a higher price. Two years ago, Apple delivered the Pro Display XDR, the ultimate monitor for creative professionals, with an impressive 6K resolution and 1,600 nits of brightness in a widescreen format.

Yet few people outside the audiovisual profession can justify five grand for a monitor, one that doesn't even come with a stand at that price. Hilarious reviews were written about it on Amazon, and even competitors like MSI took their turn at mocking the steep price of Apple's best monitor.

It's no surprise that many buyers of high-end Macs end up buying a non-Apple monitor instead. And that's when their troubles begin.

It all comes down to font and widget scaling, and resolution independence. What Apple calls HiDPI mode is just the OS recognizing that the plugged-in display operates at a super-high pixel count and scaling the desktop and user interface accordingly. It also helps if you can fine-tune custom resolutions to match your display panel's native resolution so that the image isn't washed out by hardware rescaling.

Well, bad news: none of the above seems to happen on M1-based Macs. And worse, previous workarounds for custom resolutions that used to work on Intel-based machines fail with the M1.

Can you please explain the problem with these 5K2K and QHD monitors working perfectly fine on PCs and looking bad on M1 Macs, so much so that some users end up returning them?

Macs can handle most displays at their native resolution just fine, including QHD, wide, ultra-wide, and double-wide displays. The problem is that on most displays, resolution selection is quite limited. This affects even Apple's XDR Display.

On some displays, like sub-4K displays with 1080p or 1440p resolutions, Apple Silicon Macs do not allow high-resolution display modes, namely HiDPI, and do not do scaling well. This results in a low-res desktop experience that locks the user into fonts and GUI elements that are too small or too big, with no way to change that. This is OK for 1080p displays, but on a 24-inch 1440p QHD display, for example, the resulting fonts are just too small, and the user cannot lower the resolution while retaining clarity because HiDPI support is disabled.

And what about M1 Macs not supporting the maximum resolution of certain monitors?

There are some displays that have an erroneous EDID table, which describes the resolutions accepted by the display as well as the optimal resolution. This is usually not a big problem, as virtually all desktop operating systems allow the user to choose a resolution of their liking. macOS was always more restrictive in this regard, but at least in the past, Intel Macs gave pro users the means to override a faulty EDID table in software or add custom resolutions.

This feature is completely missing on M1 Macs; there is no accessible way to add custom resolutions and display timings, which is unprecedented in the desktop OS space. This is mainly because the Apple Silicon graphics drivers are derived from iOS and iPadOS, which is on one hand great, but on the other hand rather limiting: those devices do not really need to support all kinds of third-party displays.
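For context, EDID is a small binary blob the display hands to the OS; the panel's preferred resolution lives in the first 18-byte detailed timing descriptor, starting at byte 54 of the 128-byte base block. A minimal decoder sketch (the descriptor bytes below are hand-built for a 2560x1440 panel, not captured from real hardware, and only the resolution fields are filled in) shows how little it takes to read, and therefore how feasible it would be to override, a faulty table:

```python
def decode_detailed_timing(d):
    """Extract the active resolution from an 18-byte EDID detailed
    timing descriptor: low 8 bits in their own bytes, upper 4 bits
    packed into the high nibble of a shared byte."""
    h_active = d[2] | ((d[4] & 0xF0) << 4)   # horizontal pixels
    v_active = d[5] | ((d[7] & 0xF0) << 4)   # vertical lines
    return h_active, v_active

# Hand-built fragment for a 2560x1440 panel; pixel-clock and
# blanking/timing bytes are zeroed for brevity
descriptor = bytes([0x00, 0x00, 0x00, 0x00, 0xA0, 0xA0, 0x00, 0x50] + [0] * 10)
print(decode_detailed_timing(descriptor))  # (2560, 1440)
```

An EDID override is essentially just substituting a corrected blob like this for the one the monitor reports, which is exactly the knob Intel Macs exposed and M1 Macs don't.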

That certainly seems fixable?

As this is mostly a macOS issue, Apple could fix this problem. They need to give the pro users the ability to define custom resolutions and display timings; enable HiDPI rendering for all displays; give more granular options for scaled resolutions; and allow higher scaled resolutions.

Why is BetterDummy the right solution to the problem?

Ultra-wide display users face several challenges with M1 Macs in terms of resolution. Early M1 macOS versions did not properly support some of the aspect ratios, and users had no way to define custom resolutions to fix this, as they could with Intel Macs. Later macOS versions, as far as I know, added support for these aspect ratios. Custom resolution support is still missing.

Selecting a new dummy monitor to create in BetterDummy

But even with this, the lack of HiDPI for the most common 1080p or 1440p wide displays is a problem. Even for 5K2K displays, although HiDPI is supported, the resolution options are limited, the desktop and fonts look unnaturally magnified, and the user has no way to scale the display in a way that feels right. BetterDummy attempts to solve these issues.

And for all monitors?

BetterDummy solves the lack of HiDPI resolution (mostly beneficial for 1440p displays) and the too-restrictive scaled-resolution problem (beneficial for all displays), as well as some other issues, such as customizable resolutions for headless Macs used as servers via Screen Sharing or Remote Management.

For 5K2K displays, which translate to 2.5K1K when using HiDPI, the benefit is that the user can create, for example, an 8K3K virtual screen, use HiDPI mode, and scale it to the native display resolution. This will give the user a bigger desktop (approximately 4K1.5K) while still retaining the clarity of the display.
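The numbers in that example work out as follows. This is a sketch of the scaling arithmetic only, not BetterDummy's actual code, and it assumes the common pixel interpretations of "5K2K" (5120x2160) and "8K3K" (7680x3240):

```python
def hidpi_logical(w, h, scale=2):
    """Logical (desktop) resolution of a display running in 2x HiDPI mode."""
    return w // scale, h // scale

native = (5120, 2160)                  # 5K2K ultra-wide panel
print(hidpi_logical(*native))          # (2560, 1080): the cramped default

virtual = (7680, 3240)                 # an "8K3K" dummy display
print(hidpi_logical(*virtual))         # (3840, 1620): roughly "4K1.5K" of desktop
# macOS renders the 7680x3240 virtual screen in HiDPI, mirrors it, and
# the GPU scales the result down to the panel's native 5120x2160, so
# the user gets 1.5x the desktop area while text stays crisp
```

The trade-off is that the final downscale is done on an already supersampled image, which is why the result stays sharp instead of turning into the blurry mess macOS produces on its own.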


Germany’s new coalition government backs the Public Money, Public Code initiative – Neowin

Following the elections in September, Germany is set to get a new coalition government made up of the Social Democrats, Alliance 90/The Greens, and the Free Democratic Party. According to The Document Foundation, which has been reading the coalition agreement, the new government will embrace the notion of Public Money, Public Code (PMPC), a concept that has been promoted by the Free Software Foundation Europe (FSFE) for a number of years.

Essentially, PMPC says that any software created using taxpayers' money should be released as Free Software: if the public funds the development of software, the public should also have access to the code and be able to reuse it in its own projects. PMPC calls for legislation ensuring publicly funded software is released under a Free and Open Source Software (FOSS) license.

The Document Foundation highlighted two sections from the coalition agreement, the first reads:

Development contracts will usually be commissioned as open source, and the corresponding software is generally made public.

The second says:

In addition, we secure digital sovereignty, among other things through the right to interoperability and portability, as well as by relying on open standards, open source and European ecosystems, for example in 5G or AI.

The Document Foundation, which is responsible for the FOSS office suite LibreOffice, said that it's encouraged by the commitments made by the new coalition. The coalition's commitments surface just a week or so after the German state of Schleswig-Holstein revealed it would be installing Linux on 25,000 of its computers in a cost-cutting exercise.


Microsoft to 600 million Indians: feel free to hand over some data – The Register

Microsoft's social network LinkedIn has added a Hindi version of its service.

File this one under "what took you so long?" because, as LinkedIn's announcement notes, over 600 million people speak Hindi. That makes it the third-most-spoken language in the world, behind English and Mandarin. LinkedIn already serves languages with far fewer speakers, such as Norwegian and Thai.

That the service has amassed over 82 million Indian users, its second-largest national population, without supporting Hindi suggests the network's reasoning: English is widely spoken in India and very widely used in business, academia, the media, and of course the technology industry.

But LinkedIn wants more users, so it has added the extra language.

"You will now be able to create your LinkedIn profile in Hindi, making it easier for other Hindi-speaking members and recruiters to find you for relevant opportunities," announced LinkedIn's country manager Ashutosh Gupta. "You can also access the feed, jobs, messaging, and create content in Hindi.

"As the next step, we're working towards widening the range of job opportunities available for Hindi-speaking professionals across industries, including more banking and government jobs," Gupta added.

Left unspoken is that LinkedIn charges for job ads, mines user-provided data to target ads, and sells access to members' career histories and other data through its premium programs. Recruitment consultants use those histories to create their own databases.

Gupta has promised Hindi speakers that they'll soon see a feed of useful info and job ads in their language.

The social network won't stop at Hindi. Gupta's post promises the outfit "will continue to evaluate other regional languages as we strive to create equitable economic opportunities for every member of the workforce, and to help diverse professional communities come together on LinkedIn."

Nearly 100 million Indians speak Bengali, while more than 80 million speak either Marathi or Telugu. All three language groups are larger than many already served by LinkedIn. The Register fancies it therefore won't be long before LinkedIn adds more Indian languages to its offering, especially as the regions in which they are spoken become home to more service industries.

India's Intermediary Guidelines and Digital Media Ethics Code, a regulation that requires identification of users, removal of some content, and a fast-acting grievance mechanism, will almost certainly apply to LinkedIn.

The Code has been widely criticised as effectively allowing India's government to break encryption.

It is also popular with many. Indian attitudes to social media have hardened in recent years as operators have been seen to ignore cultural norms, spread disinformation, and sometimes espouse a neo-colonial mission to civilise that is not appreciated.

When LinkedIn carries material that offends, leaks data, or endures another round of mass scraping, Microsoft India will need to brace for some backlash. And if LinkedIn's Hindi-speaking users don't take kindly to the service's standard fare, endless weak rehashes of TED talks, memes about a good attitude costing nothing, or homilies about digital transformation, that backlash could be fierce.


India reveals home-grown server that won’t worry the leading edge – The Register

India's government has revealed a home-grown server design that is unlikely to threaten the pacesetters of high tech, but (it hopes) will attract domestic buyers and manufacturers and help to kickstart the nation's hardware industry.

The "Rudra" design is a two-socket server that can run Intel's Cascade Lake Xeons. The machines are offered in 1U or 2U form factors, each at half-width, and can be fitted with a pair of GPUs and DDR4 RAM.

Cascade Lake emerged in 2019 and has since been superseded by the Ice Lake architecture launched in April 2021. Indian authorities know Rudra is off the pace, and said a new design capable of supporting four GPUs is already in the works with a reveal planned for June 2022.

The National Supercomputing Mission designed the servers and certified them to run the Trinetra HPC interconnect it has previously developed. The Mission is currently talking to manufacturers as it wants to put 5,000 locally-built Rudra machines into production.

Server-builders are not hard to find and plenty operate at scale. Just what Rudra offers that India can't source elsewhere is not clear. But the debut of the Rudra design was more about politics than tech: In October 2020 India announced plans to foster home-grown supercomputers that feature Indian tech. Rudra shows that mission is on track but also far from being able to offer the full stack contemplated at the 2020 launch.

Rajeev Chandrasekhar, minister of state for electronics and information technology & skill development and entrepreneurship, did reveal that India's pursuit of its own microprocessors has progressed. India currently develops two modestly specced RISC-V CPUs, named Shakti and Vega, and hopes they will one day meet the nation's needs and be used around the world. With the Shakti E-Class built on a 180nm process and running at between 75MHz and 100MHz, India is not yet a threat to incumbent market leaders. Chandrasekhar announced that a national competition to improve local CPU tech has been narrowed to ten finalists.

The minister also announced a National Blockchain Strategy [PDF] that calls for the establishment of a national blockchain platform that offers a sandbox developers can use to test applications that could benefit from the distributed ledger tech.

The Strategy calls for the government to offer blockchain-as-a-service to government agencies within two years, and for wide use of blockchain and its integration with clouds and the internet of things by the end of a five-year initial development phase. The tech is seen as being most applicable to e-government services, but also as having the potential to secure intellectual property and improve transactions across India's economy.


India can be the leader in Web3, says Anandan – Livemint

India has the unique opportunity to become the global leader in Web3, but needs to get its regulatory and legal frameworks in place, said Rajan Anandan, managing director, Sequoia Capital.

Web3 is the concept of a decentralized version of the Internet that runs on open-source infrastructure such as public blockchains, the underlying technology for cryptocurrencies.

Anandan told the Hindustan Times Leadership Summit (HTLS) 2021 on Friday that he was delighted by the government's decision not to ban cryptocurrencies and to come up with a legal and regulatory framework instead.

However, he said, crypto is just one small part of Web3.

"Web3 is very, very important, whether it's NFTs (non-fungible tokens), gaming, or DeFi (decentralized finance). The kind of innovation that we're seeing in DeFi is extraordinary," he added.

In the second half of 2021, Sequoia Capital India made 19 investments in Web3 startups, said Anandan.

He pointed out that many entrepreneurs from India, China, Korea, Japan, the US, UK and Australia are moving to Singapore because it has a regulatory and legal framework for Web3. Anandan said the startup ecosystem in India is no longer only about e-commerce, fintech, mobility, SaaS (software as a service) or development tools.

"Over the next five years, we are going to see a dozen unicorns in agri-tech, and we are probably going to see at least a dozen unicorns in digital health. We are going to see two or three dozen unicorns in ed-tech. In fintech, we're going to have 100 unicorns," he said.

Anandan expressed surprise at the number of initial public offerings (IPOs) by tech startups in India. However, he cautioned that going public is just one of the milestones in the journey to building an enduring company.

"To be a truly enduring company, the real question is what's going to happen in the next five years," Anandan said.

He urged startups to be very careful with their spending.

"It's important to keep in mind that funding has cycles. We are definitely at the high part of the cycle right now, but cycles turn. We're going to go through a period where it's not going to be like this at all. It's going to be very difficult to raise capital, and valuations are going to get adjusted."

According to Anandan, public market investors have very different expectations of a company's performance.

"I think if founders can raise capital, they should do so, but they should be very prudent about how they spend it over the next few years," he added.

Upasana Taku, co-founder, Mobikwik, said: "A key learning from the IPO preparation has been that public market investors have a slightly different lens from private investors. Investors in the capital markets are looking for companies where the business model is very clear, the financial performance has been demonstrated year over year for at least two to three years, and there is a clear path to profitability."

She added that, having followed a sustainable growth strategy for the last five years, it was a pleasure to bring that story to the market and to investors.

Taku said that the ecosystem is still very male-centric. However, she said it's going to become easier going forward, and there will be more women-led companies coming to the capital markets.

Mobikwik had filed for an IPO in July. Ahead of the IPO, the payment company had turned unicorn in October.



Whistleblower Edward Snowden Issues Crypto Gaming Warning, Highlights Potential Unethical Practices – The Daily Hodl

Whistleblower Edward Snowden says that the use of non-fungible tokens (NFTs) in crypto gaming comes with certain consequences.

In a new interview on Parachains, a Polkadot and Kusama-focused YouTube channel, Snowden says he is against the monetization of gaming platforms that use NFTs because they utilize a false sense of scarcity.

We have people that are trying to, sort of, maybe they're not even trying to, but the ultimate result of what they're doing is they are injecting an artificial sense of scarcity into a post-scarcity domain. I think that is actually an inherently anti-social urge here.

The former Central Intelligence Agency (CIA) employee says gamers who seek a virtual escape can potentially be put at a disadvantage from the NFT-based gaming business model.

If you think about the world that people are retreating from to their games, where they live in a cold, bare box, if they're lucky enough to even have a home in some overly expensive city where they spend all their time working, they get home exhausted.

They make their cheap meal, and then they turn on their device to escape from all that. And then in their digital world, where they're on a beautiful island, they build a beautiful home, and they want to change the color of the wall, and you've got to pay $19.99 for the wall, or for a token to let you roll for the potential to maybe recolor your wall. There is something horrible and heinous and tragic in that to me.

Snowden's comment comes following the exponential rise of gaming altcoins and the crypto-based metaverse. According to Snowden, the crypto sector is at risk of facilitating unethical practices.

I think the community should very much be trying to bend the arc of development away from injecting artificial unnecessary scarcity entirely for the benefit of some investor class into these post-scarcity domains.

One of the promises, one of the privileges of technology, is that it frees us from material limits that only exist in a material space. To try to reimpose material in immaterial space, I think is a little bit unethical.
