Applying World Cup Wisdom to the Stock Market
Metro Research – Mon, 21 Nov 2022

SINGAPORE – When the World Cup kicks off in Qatar on November 20, the stakes are high for the 32 countries vying for the top prize in international football. It’s not just fame and fortune that are at stake for managers, coaches and players.

Believe it or not, the teams are also fighting for their countries’ stock markets and economies.

The euphoria of winning or losing the World Cup apparently extends to the stock market performance of the countries involved.

The stakes are high. On average, the research found that after a country loses in the World Cup, its stock market produces significantly below-average returns the following day. However, the research did not find a corresponding positive effect for the stock markets of countries whose teams won.

There is a common saying in football that no one remembers the losing team in the final.

And, according to a study, there’s more at stake than just bad memories for losers. Coming second globally is by no means a shabby effort, but countries would really like to avoid losing in the final.

That’s because World Cup tournament runners-up have traditionally had terrible times in the stock market after the loss.

In the first month after the World Cup final, the stock markets of seven of the last nine runners-up posted performances 1.4% below the global market average.

And the slump continues over the next two months, with an average relative decline of 5.6% over the three months.

The good news is that things look up after that: a year on, the loser’s stock market trails by just 0.4%.

No time for markets during matches

The lure of the World Cup is so compelling that it has a noticeable effect on financial market activity as traders – like everyone else – pay attention to it.

Research has shown that when a particular team is playing a match, its national stock market is quite lackluster during that time.

In one example, the number of transactions on the Chilean stock market dropped by 83% when the team was playing.

In fact, the exchanges of Latin American countries are among the markets most affected when their national team is playing.

The drop in activity begins before the match kicks off and continues up to 45 minutes after the final whistle. At halftime, the drop is around 35%.

The report also noted further drops of around 5% in business activity when a goal is scored.

The limited attention that investors pay to stock markets during match days affects the price discovery process due to less liquidity in the market. This means that relevant news that could generally affect the markets would not be reflected in market prices as quickly as usual or could cause larger price swings due to lack of liquidity.

If you think choosing which team will win the World Cup is difficult, stock markets are even more unpredictable.

Just like in a football match, extreme emotions of greed and fear reign in the financial markets.

But the principles of football can guide investment strategies.

White House ‘AI Bill of Rights’ explains how to make artificial intelligence safer
Tue, 15 Nov 2022

Despite the important and ever-increasing role of artificial intelligence in many parts of modern society, there are very few policies or regulations governing the development and use of AI systems in the United States. Tech companies have largely been left to regulate themselves in this area, potentially leading to decisions and situations that have drawn criticism.

Google dismissed an employee who publicly raised concerns about how a certain type of AI may contribute to environmental and social problems. Other AI companies have developed products used by organizations like the Los Angeles Police Department, where they were shown to reinforce existing racially biased policing.

There are government recommendations and guidelines regarding the use of AI. But in early October 2022, the White House Office of Science and Technology Policy significantly added to the federal guidance by issuing the Blueprint for an AI Bill of Rights.

The Office of Science and Technology says the protections outlined in the document should be applied to all automated systems. The plan sets out “five principles that should guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence.” The hope is that this document can serve as a guide to help prevent AI systems from limiting the rights of US residents.

As a computer scientist who studies the ways in which people interact with AI systems – and in particular how anti-Blackness mediates these interactions – I find this guide to be a step in the right direction, even if it has some holes and is not enforceable.

Improving systems for all

The first two principles aim to address the safety and effectiveness of AI systems as well as the major risk that AI promotes discrimination.

To improve the safety and effectiveness of AI, the first principle suggests that AI systems should be developed not only by experts, but also with direct input from the people and communities who will use and be affected by these systems. Exploited and marginalized communities often have to deal with the consequences of AI systems without having much say in their development. Research has shown that direct and genuine community involvement in the development process is important for deploying technologies that have a positive and lasting impact on those communities.

The second principle concerns the known problem of algorithmic discrimination within AI systems. A well-known example is how mortgage approval algorithms discriminate against minorities. The document asks companies to develop AI systems that do not treat people differently based on their race, gender or other protected-class status. It suggests companies use tools such as equity assessments that can help gauge an AI system’s impact on members of exploited and marginalized communities.

These first two principles address the major issues of bias and equity encountered in the development and use of AI.

Confidentiality, transparency and control

The final three principles describe ways to give people more control when interacting with AI systems.

The third principle concerns data privacy. It aims to ensure that people have more say in how their data is used and are protected against abusive data practices. This section addresses situations where, for example, companies use deceptive design to manipulate users into giving up their data. The blueprint calls for practices such as not taking a person’s data without their consent and asking for it in a way the person understands.

The next principle deals with “notice and explanation.” It highlights the importance of transparency – people need to know when an AI system is being used as well as how it contributes to outcomes that might affect them. Take, for example, the New York City Administration for Children’s Services. Research has shown that the agency uses outsourced AI systems to predict child maltreatment – systems that most people don’t realize are being used, even when they are under investigation.

The AI Bill of Rights provides a directive that, in this example, people in New York affected by the AI systems in use must be informed that an AI was involved and have access to an explanation of what the AI system did. Research has shown that building transparency into AI systems can reduce the risk of errors or misuse.

The final principle of the AI Bill of Rights outlines a framework for human alternatives, consideration and feedback. The section clarifies that people should be able to opt out of the use of AI or other automated systems in favor of a human alternative where reasonable.

As an example of how these last two principles might work together, consider the case of a person applying for a mortgage. They would be informed if an AI algorithm was used to review their application and would have the option to opt out of this use of AI in favor of a real person.

Smart guidelines, no enforceability

The five principles set out in the AI Bill of Rights address many of the issues academics have raised about the design and use of AI. Nevertheless, it is currently a non-binding and non-enforceable document.

It may be too much to hope that industry and government agencies will put these ideas into practice exactly as the White House advocates. If the ongoing regulatory battle over data privacy offers any guidance, tech companies will keep pushing for self-regulation.

Another problem I see with the AI Bill of Rights is that it doesn’t directly call out systems of oppression – such as racism or sexism – and how they can influence the use and development of AI. For example, studies have shown that inaccurate assumptions built into AI algorithms used in health care have led to worse care for Black patients. I have argued that anti-Black racism should be directly addressed when developing AI systems. While the AI Bill of Rights addresses the ideas of bias and fairness, the lack of attention to systems of oppression is a notable hole and a known issue in AI development.

Despite these shortcomings, this plan could be a positive step toward better AI systems, and perhaps the first step toward regulation. A document like this, while not a policy, can be a powerful reference for people advocating for changes in how an organization develops and uses AI systems.


Christopher Dancy, Associate Professor of Industrial and Manufacturing Engineering and Computer Science and Engineering, Penn State

This article is republished from The Conversation under a Creative Commons license. Read the original article.

How they could drastically increase energy efficiency
Sat, 12 Nov 2022

Traditionally, “quantum supremacy” is sought from the point of view of raw computing power: we want to calculate (much) faster.

But the question of its energy consumption could now also justify research, as current supercomputers sometimes consume as much electricity as a small town (which could actually limit the increase in their computing power). Information technologies, for their part, represented 11% of global electricity consumption in 2020.

Why focus on the energy consumption of quantum computers?

Since a quantum computer can solve problems in hours, while a supercomputer can take tens of billions of years, it is natural to expect it to consume much less energy. However, making such powerful quantum computers will require us to solve many scientific and technological challenges, potentially spanning one to several decades of research.

A more modest goal would be to create less powerful quantum computers capable of solving calculations in a time relatively comparable to supercomputers but using much less energy.

This potential energy advantage of quantum computing has already been discussed. Google’s Sycamore quantum processor consumes 26 kilowatts of electrical power, much less than a supercomputer, and runs a test quantum algorithm in seconds. Following the experiment, scientists came up with classical algorithms to simulate the quantum one. The first proposed classical algorithms required much more energy, which seemed to demonstrate the energy advantage of quantum computing. However, they were soon followed by other proposals, which were much more energy efficient.

The energy advantage is therefore still open to question and constitutes an active research topic, especially since the quantum algorithm run by Sycamore has not yet found a “useful” application.

Superposition: the fragile phenomenon at the heart of quantum computing

To know whether quantum computers can be expected to provide an energy advantage, it is necessary to understand the fundamental laws by which they operate.

Quantum computers manipulate physical systems called qubits (for quantum bits) to perform a calculation. A qubit can take on two values: 0 (the “ground state,” with minimum energy) and 1 (the “excited state,” with maximum energy). It can also occupy a “superposition” of 0 and 1. How we interpret superpositions is still the subject of heated philosophical debate but, to put it simply, this means the qubit can be “both” in state 0 and state 1, with certain associated “probability amplitudes.”
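As a rough illustration of the amplitude idea (a mathematical sketch, not tied to any particular quantum hardware), a qubit state can be written as a pair of complex amplitudes whose squared magnitudes give the measurement probabilities:

```python
import numpy as np

# A qubit state is a unit vector of two complex "probability amplitudes":
# |psi> = a*|0> + b*|1>, with |a|^2 + |b|^2 = 1.
ket0 = np.array([1, 0], dtype=complex)  # ground state |0> (minimum energy)
ket1 = np.array([0, 1], dtype=complex)  # excited state |1> (maximum energy)

# An equal superposition of |0> and |1>:
psi = (ket0 + ket1) / np.sqrt(2)

# Measurement probabilities are the squared magnitudes of the amplitudes.
p0 = abs(psi[0]) ** 2  # probability of reading 0
p1 = abs(psi[1]) ** 2  # probability of reading 1

print(round(p0, 3), round(p1, 3))  # 0.5 0.5
```

Measuring this state yields 0 or 1 with equal probability, which is what “both at once, with amplitudes” cashes out to in practice.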

Gate-tunable heterojunction tunnel triodes based on 2D metal selenide and 3D silicon
Mon, 07 Nov 2022


Credit: Miao et al, Nature Electronics (2022). DOI: 10.1038/s41928-022-00849-0

Electronic engineers around the world are trying to improve the performance of devices, while reducing their energy consumption. Tunneling field-effect transistors (TFETs), an experimental class of transistors with a unique switching mechanism, could be a particularly promising solution for the development of low-power electronics.

Despite their potential, most silicon-based and III-V heterojunction TFETs exhibit low on-current densities and on-off current ratios in some modes of operation. Fabricating these transistors using 2D materials could help improve electrostatic control, potentially increasing their current densities and on/off ratios.

Researchers from the University of Pennsylvania, the Chinese Academy of Sciences, the National Institute of Standards and Technology, and the Air Force Research Laboratory have recently developed new heterojunction tunnel triodes based on van der Waals heterostructures formed from 2D metal selenide and 3D silicon. These triodes, presented in a paper published in Nature Electronics, could outperform other TFETs presented in the past in terms of current densities and on/off ratios.

“This paper is based on realizing tunneling transistors or switching devices based on 2D materials,” Deep Jariwala, one of the researchers who conducted the study, told TechXplore. “It’s a well-known idea that many people have been trying to work on and solve for a decade now. The problem has always been device performance to make a strong case.”

To improve the performance of tunneling switching devices in terms of on/off current ratios, sub-threshold swing and on-current density, some studies have attempted to build devices using only silicon, III-V semiconductors, or 2D semiconductors. Although some of these proposed devices performed better than others, their performance seemed to be impaired in at least one relevant dimension.

“Thanks to our work, we have shown that when 2D InSe or WSe2 is combined with silicon, the three main performance characteristics of the device mentioned above can be simultaneously enhanced,” explained Jariwala.

To fabricate their heterojunction tunnel triodes, Jariwala and his colleagues stamped an InSe crystal onto a heavily p-doped silicon wafer. Subsequently, they created contacts using lithography, a printing method, deposited top gate dielectric and patterned gate electrodes.

“One of the main advantages of our gate-tunable tunnel triodes is that they are based on silicon, which is the underlying material of all microprocessors,” Jariwala said. “In addition, they exhibit some of the steepest sub-threshold slopes, on/off current ratios and on-current densities for tunneling devices, making them some of the most energy-efficient switches based on tunneling phenomena.”

In the first tests, the triodes created by the researchers achieved sub-threshold slopes as low as 6.4 mV decade⁻¹ and average sub-threshold slopes of 34.0 mV decade⁻¹ over more than four decades of drain current. Remarkably, they also exhibited an on/off current ratio of around 10⁶ and an on-state current density of 0.3 µA µm⁻¹ at a drain bias of –1 V.
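For readers unfamiliar with the metric: sub-threshold slope is the gate-voltage change needed to move the drain current by one decade, so a steeper (smaller) slope means a sharper switch. A small illustrative calculation – the bias points below are invented to reproduce a 34 mV decade⁻¹ average, not taken from the paper:

```python
import math

def subthreshold_slope_mv_per_decade(vg1_v, id1_a, vg2_v, id2_a):
    """Average sub-threshold slope between two (gate voltage [V], drain current [A]) points."""
    decades = math.log10(id2_a / id1_a)        # decades of drain current spanned
    return (vg2_v - vg1_v) * 1000.0 / decades  # convert V to mV per decade

# Hypothetical points: current rises four decades over a 136 mV gate swing.
slope = subthreshold_slope_mv_per_decade(0.0, 1e-10, 0.136, 1e-6)
print(round(slope, 1))  # 34.0
```

For comparison, conventional MOSFETs are limited to about 60 mV decade⁻¹ at room temperature, which is why sub-60 slopes are the selling point of tunneling devices.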

“We have shown that InSe works as an excellent 2D semiconductor in combination with good old silicon to enable some of the most energy-efficient switching devices,” Jariwala said. “The possible implications of this discovery are immense, since going beyond Moore’s Law device scaling is the key requirement of the hour for device innovation in microelectronics.”

The heterojunction tunnel triodes introduced by Jariwala and his colleagues could pave the way for more efficient low-power electronic devices. In principle, their design could also be extended to whole wafers, since InSe-based 2D materials can be grown directly on silicon.

“In our next studies, we plan to increase the growth of the material to make it more practical and to reduce the dimensions of the device to improve performance even more,” added Jariwala. “Demonstrating material growth over a large area on a wafer will be a milestone that we hope to achieve by next year.”

More information:
Jinshui Miao et al, Two-dimensional metal selenide and three-dimensional silicon-based heterojunction tunneling triodes, Nature Electronics (2022). DOI: 10.1038/s41928-022-00849-0

© 2022 Science X Network

Quote: Gate-tunable heterojunction tunneling triodes based on 2D metal selenide and 3D silicon (2022, November 7) retrieved November 7, 2022 from https://techxplore.com/news/2022-11-gate-tunable-heterojunction-tunnel-triodes-based.html

This document is subject to copyright. Except for fair use for purposes of private study or research, no part may be reproduced without written permission. The content is provided for information only.

Strengthen your organization against insider threats
Tue, 01 Nov 2022

Insider threats, whether intentional or not, can have a devastating effect on a business, often resulting in financial and reputational loss.

Intentional insider threats are cybersecurity threats from people working directly with an organization, such as employees, contractors or business partners, who want to steal data for malicious purposes.

“Typically, it’s a disgruntled employee or former employee who still has access to the system, or a ‘hacktivist’ employee who was hired for the sole purpose of infiltrating the company,” said Jeremiah Mason, senior vice president of product management at authID, a biometric authentication provider.

Unintentional insider threats are those committed by insiders who inadvertently put their organization at risk. They can do this by clicking a phishing link in an email, breaking company policies, or even accidentally sending sensitive company data to the wrong person.

Detect insider threats

Insider threats are currently difficult to detect because typical threat detection tools are designed to detect external attacks.

“They are made to look for things from outside,” said Joseph Blankenship, vice president, director of research at Forrester Research. “They’re not necessarily made to examine insider threats.”

However, organizations can use analytics tools that detect changes in user behavior. For example, companies might want to be careful if employees are accessing and downloading huge amounts of data they don’t need to do their job, Blankenship said.
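A toy version of that behavioral check – purely illustrative, with made-up numbers and far simpler than commercial user-behavior analytics – might flag a user whose daily download volume jumps far above their own baseline:

```python
from statistics import mean, stdev

def flag_unusual_downloads(history_mb, today_mb, z_threshold=3.0):
    """Flag a user whose download volume today is far above their own baseline.

    history_mb: the user's past daily download volumes (MB).
    Uses a simple z-score rule; real tools use much richer models.
    """
    mu, sigma = mean(history_mb), stdev(history_mb)
    if sigma == 0:
        return today_mb > mu
    return (today_mb - mu) / sigma > z_threshold

# A user who normally moves ~100 MB/day suddenly pulls 5 GB:
baseline = [95, 110, 102, 98, 105, 99, 101]
print(flag_unusual_downloads(baseline, 5000))  # True
print(flag_unusual_downloads(baseline, 104))   # False
```

The point is the per-user baseline: the same 5 GB transfer might be routine for a video editor but anomalous for an HR analyst.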

Additionally, vendors are beginning to offer tools specifically designed to detect insider threats.

“For example, Code42 has a tool specifically designed for insider threats,” Blankenship said. “And now you’re starting to see these tools designed for that start looking at how insider threats are different from other threats.”

Protect your data from insider threats

Insider threats have increased by 44% over the past two years, while the cost per insider threat incident has increased by a third since 2020 to $15.38 million, according to the Ponemon Institute. 2022 Global Report on the Cost of Insider Threats.

Ponemon reports that 67% of organizations experienced between 21 and 40 incidents per year, up from 60% in 2020. The report also notes that it now takes organizations longer to contain insider threats than in 2020, the number of days going from 77 to 85.

Therefore, it is essential that businesses take steps to protect their systems and data against these threats.

Here are some tips to help you achieve this.

Develop/revise your security strategy

Companies should start by developing – or, if they already exist, reviewing – their security policies to identify and address security vulnerabilities.

“Clearly identify your risks and vulnerabilities, as well as the technologies, policies and procedures needed to mitigate them,” said Dominique Birolin, vice president of cybersecurity and compliance at Strive Consulting, a division of Planet Group. “Next, create a roadmap to implement the missing mitigation components and the metrics you’ll use to determine how well they work.”

Organizations’ strategic plans should also ensure that employees are both properly qualified and available to implement necessary security precautions to respond quickly to insider threats, Birolin said.

Apply the principle of least privilege

Companies should only give people access to the systems and data they need to do their job. Therefore, one of the most important steps, but also one of the most difficult to implement, is the principle of least privilege.

“We don’t necessarily want to give them carte blanche access,” Blankenship said.

To this end, organizations must thoroughly screen employees, contractors, and vendors before allowing them access to their data and systems.

“It’s about making sure the level of access is only what they need and nothing more,” said Justin Blackburn, threat detection engineer at AppOmni, a SaaS security provider. “[It’s] also using role-based access controls and security group roles, and proactively monitoring and auditing these to ensure that people have not been inadvertently granted access to resources they should not have access to.”

“Rigorous permission and access management is essential and should include forms of multi-factor authentication,” added Timothy Morris, chief security adviser at Tanium, a cybersecurity and systems management company. “Approval processes should include due diligence before granting access, with revocation when access is no longer needed.”

Morris also pointed out that threat-hunting activities, such as log review by SecOps teams, can help here as well, “to point out suspicious and rare behaviors or patterns.”
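The least-privilege idea above can be sketched as a deny-by-default role check; the role and permission names here are invented for illustration and are not from any product mentioned in this article:

```python
# Each role grants only the permissions needed for the job; access checks
# deny by default, so a missing or unknown role confers nothing.
ROLE_PERMISSIONS = {
    "hr_analyst":    {"read:hr_records"},
    "payroll_admin": {"read:hr_records", "write:payroll"},
    "contractor":    {"read:project_docs"},
}

def is_allowed(user_roles, permission):
    """Deny by default; allow only if some assigned role grants the permission."""
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in user_roles)

print(is_allowed(["contractor"], "read:project_docs"))  # True
print(is_allowed(["contractor"], "write:payroll"))      # False
```

Revoking a departing contractor’s access then reduces to removing their role assignment, which is exactly the audit the quoted experts recommend running proactively.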

Integrate anti-phishing modalities into daily routine

To protect against insider threats, organizations and their employees need to make security part of their daily routines, for example by prevent phishing attacks.

“Everyone has a level of responsibility in fighting phishing attacks,” said Jamie Moles, senior technical marketing manager at cybersecurity firm ExtraHop. “Positive reinforcement, ongoing training, and strong feedback loops are all key to making it stick.”

Phishing continues to be a key method hackers use to target employees, creating unwitting insider threats, he said.

“Today, threat actors target employees through sophisticated intelligence gathering, identifying people and positions to ensure they send ‘credible’ emails with relevant subject lines and attachments,” Moles said. “These phishing emails can be almost impossible to identify as hoaxes.”

To that end, organizations need to think about how their technologies support their outreach and education efforts.

“IT managers should have a plan and tools in place to support mid-game intrusion detection, before [threat actors] are capable of exfiltrating or encrypting critical data,” Moles said.

For example, Moles said, companies should train every employee to:

  • Check the sender’s email address. “It’s often an easy red flag that users miss when they’re in a hurry or it looks like the note is coming from their boss or CEO,” Moles said.
  • Check the links. “Hover over the link to see the full URL, or better yet, Google the item to access the linked item for yourself,” Moles said.
  • Verify via different methods to determine the legitimacy of an email. “If the legitimacy of an email is suspect, contact the sender directly through another channel [or] a new email, or visit the company website or social media to connect directly,” Moles said.

Disable access for departing employees

When employees leave, organizations should immediately disable those employees’ access to systems and data. The same applies to suppliers and/or business partners when these partnerships end.

However, when it comes to employee departures, this is not quite enough.

“I think savvy companies also need to have processes in place that allow them to add another level of control and oversight over these employees,” said Terry Ray, Senior Vice President of GTM Data Security and Field Technical Director at Imperva, a cybersecurity software and services company. “Maybe when they give their two weeks’ notice, they are placed in a high-risk group and someone is tasked with monitoring their activity every day to understand every file or database they access.”


Incentives linked to production – like the priest’s egg, good in part
Wed, 26 Oct 2022

In its excessive and obsessive desire to promote Make in India at all costs, the PLI program ignored certain fundamental economic concerns and principles.


Launched in March 2020 targeting the electronics sector, the Centre’s Production Linked Incentives (PLI) program is the Narendra Modi government’s attempt to boost large-scale manufacturing. The objective is for manufacturing to contribute 25% of GDP by 2025, compared with 16% currently. Naturally, this would also create job opportunities.

The fact that there are as many schemes as there are key sectors identified by the government in conjunction with Niti Aayog speaks to its rejection of a one-size-fits-all design. The government realized that each industry has its own problems and opportunities and therefore avoided a blanket approach.

The other virtue of the PLI is that it is awarded and disbursed only when results are shown in terms of additional production, which is why it is also called a piece-rate incentive. It is also WTO-compliant, as it avoids trade-distorting export subsidies, and is secular in its application – incentives are available regardless of whether you export or serve the domestic market.

Read also : MSMEs in the engineering sector are looking for an incentive system linked to production


The reason for taking stock of the program at this point is to see why it hasn’t boosted production significantly except in the mobile phone sector, where all the big players like Samsung and Apple have not only eagerly adopted the program but also begun exporting.

Made in India

But as Raghuram Rajan points out, in its excessive and obsessive desire to promote manufacturing in India at all costs, the government ignored some fundamental economic concerns and principles. For example, PLI has completely bypassed the MSME sector – which accounts for 36% of domestic production and 40% of exports – in its effort to woo the big players in each sector. This goes against its avowed swadeshi plank, which swears by small domestic producers.

The government also did all it could to promote domestic manufacturing by raising import duties, lest domestic output be shunned and imports preferred. Rajan cites the example of the mobile phone industry.

Production incentive

Import duty on mobile phones was raised to 20% in April 2018 as a precursor to the PLI bait for manufacturing in India. The result: the iPhone 13 Pro Max is available in Chicago for Rs 92,500, while the same model in India is priced at Rs 1.29 lakh, a markup of almost 40%, thanks to the swadeshi principle of protecting domestic industry, which denies Indian buyers the benefit of lower prices for imported products. In other words, Apple, like Sony in the 1970s, would have been happy to sell iPhones in India as cheaply as in the United States had it not been for PLI.
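The “almost 40%” markup follows directly from the two prices quoted; a quick check, using the rupee figures as cited:

```python
us_price = 92_500       # iPhone 13 Pro Max price in Chicago, as cited (Rs)
india_price = 129_000   # same model in India: Rs 1.29 lakh

markup = (india_price - us_price) / us_price
print(f"{markup:.1%}")  # 39.5%
```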

Other problems with PLI

Not only that, value addition is completely ignored by PLI, to the extent that a cellphone manufacturer is pampered with a PLI subsidy even if he imports all the parts and merely assembles them in India. This effectively turns the 6% PLI subsidy into a significant subsidy of 25% to 30% of value added, given that the incentive is on the price charged while the value added is only a small fraction of it.
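To see how a 6% incentive on price becomes 25–30% of value added, assume – illustratively, since the article does not give the split – that domestic assembly adds only about a fifth of the price:

```python
price = 100.0        # price charged for the assembled phone (arbitrary units)
pli_rate = 0.06      # PLI incentive: 6% of the price charged
value_added = 22.0   # assumed domestic value added (about 22% of price)

subsidy = pli_rate * price              # incentive is paid on the full price
effective_rate = subsidy / value_added  # subsidy as a share of value added
print(f"{effective_rate:.0%}")  # 27%
```

The smaller the domestic value added, the larger the effective subsidy rate on it – which is exactly the distortion the author is pointing at.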

The scheme also does not care about cost reduction as long as production increases: the manufacturer has no incentive to mind costs or prices, because the incentive is based purely on quantity.

PLI is also criticized for its borderline fanciful selectivity – the textile industry tops the list of 13 sectors notified to date, but strangely the leather industry is left out. The simplest answer is that the government has a limited budget allocation for each scheme, so when the actual eligible incentive exceeds the allocation, incentives are rationed among claimants.

Read also : Union Cabinet approves production-related incentives worth ₹2,000,000 for 10 sectors

Automotive giants are unable to meet their incremental revenue commitments given the shortage of chips so critical to modern cars. But the PLI regime is unforgiving, tolerating no exceptions even for genuine commercial reasons. Nor does the supply chain dislocation caused by the ongoing war in Ukraine earn any sympathy for shortfalls in production.

Globally, manufacturing is incentivized in various ways – the creation of special economic zones, tailor-made logistics and sector-specific incentives, tax- and credit-based systems, and research and development-based approaches. India’s quantity-based PLI system is akin to the “piece rate” method; apart from being simple, it is considered the surest way to ensure higher output. Special R&D incentives, while seemingly progressive, give charlatans a head start, as India’s income tax experience shows.

PLI has shown results in the mobile phone segment of the electronics sector. Apple suppliers led by Foxconn have pledged to produce at least Rs 25,000 crore worth of mobile devices in FY23, beginning April 1.

That is triple the minimum incremental production committed for FY22. But as noted earlier, this comes at a high price for domestic customers, with high import duties effectively discouraging imports. In short, PLI is the old domestic-industry-protection model, warts and all, with no thought for the domestic consumer, who might otherwise have feasted on cheap imports.

]]>
One country, two systems? Hong Kong looks set to diverge from China on digital assets https://metroresearch.org/one-country-two-systems-hong-kong-looks-set-to-diverge-from-china-on-digital-assets/ Tue, 18 Oct 2022 09:47:09 +0000 https://metroresearch.org/one-country-two-systems-hong-kong-looks-set-to-diverge-from-china-on-digital-assets/ Hong Kong is preparing a series of regulatory reforms to make the city more attractive to businesses in the cryptocurrency and blockchain sectors, after losing businesses to Singapore amid fears that China’s ban on cryptocurrency crypto trading finally hits the town. Elizabeth Wong, a senior official at Hong Kong’s Securities and Futures Commission, or SFC, […]]]>

Hong Kong is preparing a series of regulatory reforms to make the city more attractive to businesses in the cryptocurrency and blockchain sectors, after losing companies to Singapore amid fears that China’s ban on cryptocurrency trading would finally reach the city.

Elizabeth Wong, a senior official at Hong Kong’s Securities and Futures Commission, or SFC, said the city has a different crypto policy from mainland China’s and will not be affected by the mainland’s blanket ban on crypto.

“Hong Kong has the ‘one country, two systems’ principle,” Wong said Monday at the InvestHK conference in the city. “It is a constitutional principle that forms the basic foundation of Hong Kong’s financial markets,” said the director of licensing and head of the SFC’s fintech unit.

Hong Kong initially attracted some of the biggest crypto exchanges in the world, such as FTX, run by billionaire Sam Bankman-Fried. But Bankman-Fried moved FTX’s headquarters from the city to the Bahamas in 2021, a move followed by the Crypto.com exchange’s relocation to Singapore. China’s ban on digital asset trading that same year sparked concern that Hong Kong could follow suit.

Wong said on Monday that Hong Kong regulators initially took a “cautious” approach to the digital asset industry, for example by barring retail investors from trading on centralized crypto exchanges in the city.

Image: Elizabeth Wong speaking at Fintech Day | Ningwei Qin, Forkast

“Now may be a good time to really think carefully about whether we will continue with this requirement for professional investors,” Wong said. The SFC has already eased some restrictions on retail investors, allowing service providers to sell them certain derivatives related to virtual assets starting in January, Wong noted.

Additionally, an anti-money laundering bill introduced in Hong Kong’s Legislative Council could establish a new licensing regime for digital assets if passed. “We hope this regulatory framework can enable the industry to have an orderly and sustainable development while balancing investor protection,” Wong said.

Gain territory?

Hong Kong lost to Singapore as cryptocurrency exchanges like Crypto.com and Huobi moved to the city-state, even as the Monetary Authority of Singapore warned the public of the high risks of investing in crypto and banned certain advertisements by exchanges.

But Singapore has at the same time declared its intention to become a web3 and blockchain-based hub for the financial sector.

Hong Kong’s apparent shift in crypto regulation could be an attempt to stem the loss of business to its rival financial hub Singapore, said Anna Liu, chief legal officer of Hashkey Capital, an Asia-based end-to-end digital asset financial services group.

“Singapore has been pushing the development of the crypto industry since 2019 and I don’t think they will lose the advantage in the short term,” Liu said. However, both jurisdictions have their own strengths, and Hong Kong’s shift to a more Singapore-like attitude will increase its global competitiveness and likely win back some web3 companies, she added.

An updated policy statement on cryptocurrencies is also expected to be released during Hong Kong Fintech Week, which begins October 31, according to a blog post released by Hong Kong Financial Secretary Paul Chan on Sunday.

The statement will cover the city’s policy stance and virtual asset regulations to offer a “vision for Hong Kong’s development into an international virtual asset hub,” according to the blog.

“We see a huge opportunity for Hong Kong to regain its position as a virtual asset and Web 3 hub with a clear legal and regulatory framework,” said Victor Yim, head of fintech at local incubator Cyberport, during the InvestHK conference on Monday.

Cyberport is 100% owned by the local government, with about half of the city’s blockchain technology companies and 72% of digital asset start-ups as members.

Brian Chan, chief investment officer at Venture Smart Asia Hong Kong, was also beating the drum for the city at InvestHK. “As a gateway between mainland China and the West, Hong Kong is very well placed for us to acquire talented people and find high-quality developers to work with,” he said.

The government can further support the web3 industry and attract talent through greater clarity and transparency on regulations, Chan added. He said Hong Kong’s web3 industry would benefit from integration with the city’s financial market, which boasts the seventh-largest stock exchange in the world.

]]>
Kron Telekomünikasyon Hizmetleri: Understanding the Lifecycle of a Data Breach https://metroresearch.org/kron-telekomunikasyon-hizmetleri-understanding-the-lifecycle-of-a-data-breach/ Sat, 15 Oct 2022 21:33:01 +0000 https://metroresearch.org/kron-telekomunikasyon-hizmetleri-understanding-the-lifecycle-of-a-data-breach/ Understand the lifecycle of a data breach With much of the business world on board with digital transformation and its demands, many questions about data usage have come to the fore. With the growing importance of data and data-driven workflows, cybersecurity issues have risen to prominence. Finding workable solutions to these problems requires an in-depth […]]]>

Understanding the lifecycle of a data breach

With much of the business world on board with digital transformation and its demands, many questions about data usage have come to the fore. With the growing importance of data and data-driven workflows, cybersecurity issues have risen to prominence. Finding workable solutions to these problems requires an in-depth analysis of what data breach incidents tell us about enterprise IT infrastructures.

First of all, it is very important to realize that the damage caused by a data breach is not limited to the loss of data. In addition to data loss, a data breach can cause temporary or permanent damage to the business model, cause system downtime, lead to costly ransomware, and negatively impact corporate image.

The first step to minimizing the potential damage caused by data breaches – and even to taking a series of cybersecurity measures by learning the right lessons from the past – is to properly analyze the data breach lifecycle. Spanning the period between the moment the breach first occurs and the moment it is brought under control, the lifecycle can unfold in different ways depending on factors such as the type of cyberattack.

We’ve put together some tips businesses need to know about the data breach lifecycle so they can integrate advanced data security into their IT infrastructure. In our statistically backed research, we have attempted to demonstrate why it is so important to properly analyze data breach cases.


Data breach lifecycle and root causes

Where sensitive data is concerned, the first question to answer is how a hacker operates. Understanding how hackers think and how they plan cyberattacks can help you better prepare for an attack. To properly analyze and manage each stage of preparation, it is extremely important to master the stages of the lifecycle.

Comprising phases such as target selection and reconnaissance, attack planning, attack execution, exploitation and lateral movement, and endgame, the lifecycle of a data breach represents a meticulously planned process for a hacker. We will describe the attack phases in detail from the cyber attacker’s perspective, but first we would like to explain the source of the vulnerabilities and security weaknesses that attract hackers’ attention in the target selection and reconnaissance phase.

Organizations without advanced cybersecurity protocols are very likely to have both software and hardware vulnerabilities in their IT infrastructures. Security weaknesses stemming from device hardware design, third-party software flaws, misconfiguration, compromised credentials, business email compromise (BEC), phishing attacks, ransomware attacks and data leaks by malicious insiders can all lead to data breaches.

To avoid such problems and prevent data leaks, developing the right cybersecurity policies has become a necessity, not an option. Now that we’ve listed the possible lifecycle sources, let’s examine and analyze the breaches from a cyber attacker’s perspective.


Data Breach Ecosystem

Understanding the methods used and the paths taken by the cyber attacker in the data breach lifecycle, which consists of five phases, can help you more easily take certain preventive measures. For this reason, it may be useful to examine in detail what the five phases mean to a hacker.


Reconnaissance of security vulnerabilities

The lifecycle of a data breach begins when the attacker discovers a security vulnerability in the IT infrastructure to be attacked. Once the hacker has located the security flaw – that is, the network’s weak point – he moves on to determining the attack strategy. The reconnaissance phase often involves targeting resources that can open multiple doors for the attacker within the network, such as credentials, sensitive personal data and financial information.


Create an attack strategy

The basic strategy in security breach cases that lead to data disclosure is based on system access. This is usually achieved by intercepting the credentials of a user who has access to the network or by compromising authentication protocols with malware. The strategy phase depends heavily on the data obtained during reconnaissance of security vulnerabilities.


Identifying the right tools for system access

The objective of the attack – whether achieved by hijacking login credentials, planting malware or using another attack vector – is to control the system for a long time without being noticed. Using any of these attack vectors, the cyber attacker can penetrate deeper into the IT infrastructure and disrupt it from the moment they enter the system.


On the way to the target

By targeting IT infrastructure that is not configured with advanced cybersecurity protocols, the attacker can easily reach their target using the right attack vectors. Usually the goal is to make money or disrupt the continuity of the business model; ransomware attacks can target both.


Damage assessment

The longer the data breach lifecycle, the harder it becomes to detect damage. As the cycle lengthens due to delays in detecting data breaches, more data may be leaked and greater financial losses may be incurred.

According to a recent study, it takes an average of 277 days worldwide to detect a data breach. Of that time, 207 days are related to the detection of the data breach, while 70 days are spent trying to contain the breach.

One of the conclusions of the same study relates to the life cycle cost of a data breach. Even a life cycle of less than 200 days costs an average of US$3.74 million. The longer the cycle time, the higher the cost.
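The study’s averages can be read as a simple split of the lifecycle into a detection span and a containment span. The sketch below illustrates this with hypothetical dates, chosen only to reproduce the 207 + 70 = 277-day global averages cited above:

```python
from datetime import date

def lifecycle_days(breach_start, detected, contained):
    """Split a breach lifecycle into days-to-detect and days-to-contain."""
    to_detect = (detected - breach_start).days
    to_contain = (contained - detected).days
    return to_detect, to_contain, to_detect + to_contain

# Hypothetical dates matching the study's global averages
print(lifecycle_days(date(2022, 1, 1), date(2022, 7, 27), date(2022, 10, 5)))
# (207, 70, 277)
```

Tracking these two spans separately matters because, as the study shows, they are driven by different capabilities: detection tooling shortens the first number, incident response shortens the second.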


PAM solutions: high efficiency in the detection of data breaches

Mastering the entire IT infrastructure and setting up a strict control mechanism based on the 24/7 principle is the best way to detect data breaches. To perform these functions, an advanced cybersecurity protocol is required. This is where Privileged Access Management (PAM) solutions come in.

PAM solutions allow organizations to take advantage of an advanced control mechanism for their IT infrastructure. By auditing every entity that holds batches of sensitive data, including the database, and controlling access to those areas, PAM also does a great job of preventing breaches that can result from user error on the network.

Our Privileged Access Management (PAM) product, Single Connect, also combines access control and data security applications to reduce the risk of a data breach through its advanced modules. Let’s take a look at the Single Connect modules:

  • Privileged session manager: Tracks and records the activities of privileged accounts with access to critical data. Facilitates centralized management and control of all sessions.
  • Dynamic password checker: With its password vault feature, it isolates passwords from authorized network users and prevents password sharing.
  • Two-factor authentication: Verifies privileged users with various verification mechanisms such as time and geographic location characteristics.
  • Data Access Manager: Monitors and logs all critical data areas, including database, and administrator actions on the system.
  • Dynamic Data Masking: Prevents data leaks by displaying existing data as hidden information instead of real sensitive information.
  • Automation of privileged tasks: By automating critical tasks, it eliminates human error and achieves high efficiency.
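As a rough illustration of the dynamic data masking idea in the list above – a generic sketch, not Single Connect’s actual implementation – a masking routine might hide all but the tail of each sensitive field before displaying a record:

```python
def mask_value(value, visible=4, mask_char="*"):
    """Replace all but the last `visible` characters of a sensitive string."""
    if len(value) <= visible:
        return mask_char * len(value)
    return mask_char * (len(value) - visible) + value[-visible:]

def mask_record(record, sensitive_fields):
    """Return a copy of a record with the named fields masked."""
    return {key: mask_value(val) if key in sensitive_fields else val
            for key, val in record.items()}

row = {"name": "A. User", "card": "4111111111111111"}
print(mask_record(row, {"card"}))
# {'name': 'A. User', 'card': '************1111'}
```

The point of masking at display time is that even a compromised user session or a leaked screen only ever sees the hidden form, not the real sensitive value.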

Contact us today to effectively protect against data breaches and other cyber threats, and to learn all the details of how to integrate our PAM solution into your company’s IT infrastructure.

]]>
After a stroke in the brain of an infant, right side https://metroresearch.org/after-a-stroke-in-the-brain-of-an-infant-right-side/ Tue, 11 Oct 2022 02:04:28 +0000 https://metroresearch.org/after-a-stroke-in-the-brain-of-an-infant-right-side/ WASHINGTON – A clinical study by researchers at Georgetown University Medical Center found that for children who had suffered a major stroke in the left hemisphere of their brain within days of birth, the infant brain was sufficiently ” plastic” so that the right hemisphere acquires the linguistic abilities usually handled by the left side […]]]>

WASHINGTON – A clinical study by researchers at Georgetown University Medical Center found that for children who had suffered a major stroke in the left hemisphere of the brain within days of birth, the infant brain was sufficiently “plastic” for the right hemisphere to acquire the language abilities usually handled by the left side while also retaining its own language abilities.

The left hemisphere of the brain is normally responsible for sentence processing (understanding words and sentences as we listen to speech). The right hemisphere of the brain is normally responsible for processing the emotion of the voice – is it happy or sad, angry or calm. This study aimed to answer the question “what happens when one of the hemispheres is injured at birth?”

The findings appear in PNAS the week of October 10, 2022.

The participants in this study developed normally during pregnancy. But around birth, they had a major stroke, which would have debilitating consequences in adults. In infants, a stroke is much rarer, but occurs in about one in four thousand births.

The researchers studied perinatal arterial ischemic stroke, a type of brain injury that occurs around the time of birth in which blood flow is cut off to part of the brain by a blood clot. The same type of stroke occurs much more frequently in adults. Previous studies of brain injury in infants have included multiple types of brain injury, but the focus in this study on one specific type of injury allowed the authors to find more consistent effects than in previous work.

“Our most important finding is that brain plasticity, specifically the ability to reorganize language to the opposite side of the brain, is certainly possible early in life,” says Elissa Newport, Ph.D., director of the Center for Brain Plasticity and Recovery at Georgetown University Medical Center, a professor in the departments of Neurology and Rehabilitation Medicine and first author of this study. “However, this early language plasticity is limited to one region of the brain. The brain is not able to reorganize injured functions anywhere, because more drastic reorganization is not possible even in early life. This gives us great insight into areas we could focus on for potential breakthroughs in the development of recovery techniques in adults as well.”

The investigators recruited people from across the United States who all had moderate to large strokes in the cortex region of their left hemisphere at the time of birth. To assess the long-term results of their language abilities, participants took language tests between the ages of 9 and 26 and were compared to their nearby healthy siblings. They were also scanned in an MRI to reveal which areas of the brain were involved in understanding the sentences.

The participants and their healthy siblings all performed the language tasks almost perfectly. The main difference was that stroke participants processed the sentences on the right side of the brain while their siblings processed sentences on the left side. Stroke participants showed a very consistent pattern of language activation in the right hemisphere, regardless of the extent or location of stroke damage to the left hemisphere. Only one of the 15 participants, who had the smallest stroke, did not show clear dominant right-hemisphere activation.

“It is also remarkable that many years after their stroke, our participants are all highly functioning adults. Some are honor students and others are working or have graduated with a master’s degree,” says Newport. “Their achievements are remarkable, especially since some of their parents were told when they were born that their strokes would lead to lifelong disabilities.”

In future studies, researchers hope to better understand why the left hemisphere consistently becomes dominant in healthy brains but consistently loses to the right hemisphere when there is a large left-hemisphere stroke. An additional question of particular interest – and clinical importance – is why left-hemisphere language can successfully reorganize to the right hemisphere if injuries occur very early in life but not later. Research on stroke recovery and aphasia treatment in adults suggests that plasticity shrinks with age, which Newport hopes to study, as it could be of great benefit and potential therapeutic interest for adult stroke survivors.

###

The researchers are very grateful to the participants and their families who made invaluable contributions to this work.

Besides Newport, other Georgetown University authors include Anna Seydell-Greenwald, Barbara Landau, Peter E. Turkeltaub, Catherine E. Chambers, Kelly C. Martin, and Rebecca Rennert. Margot Giannetti and Alexander W. Dromerick are at Georgetown University and MedStar National Rehabilitation Hospital. Rebecca N. Ichord is at the University of Pennsylvania Perelman School of Medicine and Children’s Hospital of Philadelphia. Jessica L. Carpenter is at the University of Maryland in Baltimore. William D. Gaillard and Madison M. Berl are at Children’s National Hospital and Center for Neuroscience, Washington, DC.

This work was supported by funds from Georgetown University and MedStar Health; by the Solomon James Rodan Pediatric Stroke Research Fund, the Feldstein Veron Innovation Fund and the Bergeron Visiting Scholars Fund at the Center for Brain Plasticity and Recovery; by American Heart Association grant 17GRNT33650054; by NIH grant P50HD105328 to DC-IDDRC of National Children’s Hospital and Georgetown University; and by NIH grants K18DC014558, K23NS065121, R01NS244280, and R01DC016902.

Newport reports having no personal financial interests related to the study.

About Georgetown University Medical Center

As a premier academic health and science center, Georgetown University Medical Center synergistically delivers excellence in education – training physicians, nurses, health administrators and other health professionals, as well as biomedical scientists – and cutting-edge interdisciplinary research collaboration that strengthens our capacity for basic science and translational biomedical research to improve human health. Patient care, clinical research and education are conducted with our academic health system partner, MedStar Health. GUMC’s mission is carried out with a strong emphasis on social justice and a dedication to the Catholic and Jesuit principle of cura personalis – or “care for the whole person.” GUMC comprises the School of Medicine, the School of Nursing, the School of Health, Biomedical Graduate Education, and the Georgetown Lombardi Comprehensive Cancer Center. Designated by the Carnegie Foundation as a doctoral university with “very high research activity,” Georgetown is home to a Clinical and Translational Science Award from the National Institutes of Health and a comprehensive cancer center designation from the National Cancer Institute. Connect with GUMC on Facebook (Facebook.com/GUMCUpdate) and on Twitter (@gumedcenter).


Disclaimer: AAAS and EurekAlert! are not responsible for the accuracy of press releases posted on EurekAlert! by contributing institutions or for the use of any information through the EurekAlert system.

]]>
Book bans are part of a coordinated assault on public education https://metroresearch.org/book-bans-are-part-of-a-coordinated-assault-on-public-education/ Sat, 08 Oct 2022 16:00:00 +0000 https://metroresearch.org/book-bans-are-part-of-a-coordinated-assault-on-public-education/ The BDN Opinion section operates independently and does not set newsroom policies or contribute to the writing or editing of articles elsewhere in the journal or on www.bangordailynews.com. Jonathan Friedman is Director of Free Expression and Education Programs at PEN America and the lead author of “Banned America: The Growing Movement to Censor Books in […]]]>

The BDN Opinion section operates independently and does not set newsroom policies or contribute to the writing or editing of articles elsewhere in the journal or on www.bangordailynews.com.

Jonathan Friedman is Director of Free Expression and Education Programs at PEN America and the lead author of “Banned America: The Growing Movement to Censor Books in Schools.” This column was produced by Progressive Perspectives, which is run by The Progressive magazine and distributed by Tribune News Service.

Over the past year and a half, young people and educators have witnessed a growing campaign to silence voices in schools across our country. Districts across the country are banning books with unprecedented frequency, directly compromising students’ freedom to learn. This movement has gained momentum thanks to local and national advocacy groups, many with conservative leanings, as well as political pressure from elected officials.

PEN America, where I lead free speech and education programs, recently released research documenting the extent of this threat in great detail. During the 2021 to 2022 school year — from July 2021 to June 2022 — nearly 140 school districts in 32 states issued more than 2,500 book bans.

]]>