
Is it a good thing if genetic testing can tell you what will happen to your health in the future?

According to Futurism, at-home genetic testing is becoming more commonplace and there is more data on various diseases, but is it really helpful to know, from the moment we are born, which diseases we may develop in the future?

Over the past few decades, the boom in consumer genetic testing has given us more insight into who we are, where we come from, and how our health may change in the future. It has also provided science with an enormous amount of invaluable data.

Millions of people around the world are swabbing their cheeks in their living rooms in the hope of better understanding their family health. With the genetic profiles of these volunteer test subjects, researchers are beginning to confirm long-overlooked medical insights, not to mention finding new ones.

Researchers are also using the data to improve existing treatments, or develop entirely new ones, for diseases the scientific community previously considered untreatable. The prospect of curing incurable diseases now seems closer to reality than ever before.

The more "direct to consumer" genetic testing is available to the public, the more data researchers have to work with. That data attracts additional investment, and with those investments, progress against disease should come even faster.

Disease Report Card

We are approaching an era in which newborns will go home not only with the hospital's blankets and knit hats, but also, if their parents opt for an affordable test, with an accurate map of their genome. Amit Khera, a cardiologist and researcher based in Cambridge, Massachusetts, calls it a genetic score or, more colloquially, a "gene report card."

In a recent interview, Khera said: "At a very young age, you would basically get a report card covering, say, 10 conditions, each with a rating: for example, a 90% chance of developing heart disease, a 50% chance of developing breast cancer, and a 10% risk of developing diabetes."

Think of it as a road map of a child's health risks, covering not just the coming months or the next few years but the child's entire life.

As we link more traits to genes, from earwax consistency to personal quirks to differences in taste, we may be able to predict far more about a child's health than just their risk of heart disease as they grow.

The genetic risk of many diseases can be mitigated by lifestyle, environment, and other factors. And although the risk of type 2 diabetes, for example, can indeed be changed, we have learned that there are far more than one or two genes to watch: for some diseases the genetic contributors number not in the dozens but in the hundreds.

Is it worth the risk?

There are tens of thousands of genes in the human genome. When it comes to risk assessment, the genes we know about can in some cases offset the effects of others, and these predictions will become more accurate as more genes are linked to a particular condition.

Improving risk assessment matters not only for newborns but for the rest of us as well; some of us may be more vulnerable than we realize.

So the question is: what can we do with these disease predictions? What should medical professionals do with the information? If a risk can be reduced by changing diet, taking medication, quitting smoking, or even wearing a fitness tracker, the information arrives at exactly the right time. But if the risk cannot be avoided, is knowing about the disease in advance a blessing or a curse?

This debate is especially important for predicting neurological disorders such as Alzheimer's disease. If a genetic test shows that someone is at risk for Alzheimer's, and even indicates when symptoms may begin, it gives them more time to prepare. But how does that knowledge change the rest of their life?

An adult told they will develop a disease within the next 10 years may be grateful: there is time to arrange work and family affairs and to seek care suited to the illness. But what if someone receives that information at 25? Or learns on the day they are born that they will eventually develop Alzheimer's, and has to carry that burden for their entire life?

To a large extent, the answers depend on how quickly effective treatments arrive for the diseases genetic testing can detect. Perhaps by the time a child receives a "gene report card", treatments for those conditions will already exist, and the risk scores will not trouble anyone at all. Until then, there will be a great deal of uncertainty, because our understanding of our health has outpaced our ability to fight those fates.


NASA's new rocket may fly for the first time in 2020, with an investment 46 times that of the Falcon Heavy

With the recent successful launch of the Falcon Heavy, Elon Musk's SpaceX seems to have become a leader in aerospace. But even as Musk's company racks up impressive results, NASA has not slowed its own pace of rocket development and space exploration.

NASA began developing the Space Launch System (SLS) in 2010. Once complete, SLS will be the most powerful rocket ever built, surpassing the Falcon Heavy. NASA is upgrading the RS-25 engines from the retired Space Shuttle to power SLS, and on February 21 it tested one of them, reaching 113% of the engine's rated thrust.

That means the upgraded RS-25 delivers a staggering 13% more thrust, exceeding the limits NASA designed into the engine decades ago. According to NASA, the February 21 firing also tested the RS-25's flight controller and a 3D-printed engine component.

After the successful test, NASA said: "Each RS-25 engine test brings us closer and closer to returning to deep space exploration and to destinations such as the Moon or Mars." But SLS is not the only hope for such missions: SpaceX's Falcon Heavy is also capable of them, and it has already reached space.

There are, however, many differences between the two launch systems. SLS will stand taller, at 97 meters versus the Falcon Heavy's 70 meters. In terms of payload, SLS is designed to carry slightly more to low Earth orbit: 77 metric tons versus the Falcon Heavy's 70. And according to NASA, future upgrades are likely to push SLS to an astonishing 130 metric tons.

At the press conference following the Falcon Heavy launch, Musk said the rocket's development had cost about $500 million. According to a report released last April by NASA's Office of Inspector General, NASA's investment in the SLS program will reach roughly $23 billion by the end of this year. The Falcon Heavy is also reusable, while SLS is not, which will affect future launch costs.

To support the SLS launch, NASA is also revamping a launch tower originally designed for other rockets. The tower has already cost NASA nearly $1 billion, may require further upgrades, and might be usable only once, which would force NASA to invest in converting other towers for future SLS launches.

NASA has repeatedly delayed the SLS launch date, but in November 2017 it announced a first flight in 2020. The first mission, Exploration Mission-1, will send an uncrewed spacecraft around the Moon, and SLS is eventually expected to support exploration of the Moon, Mars, and beyond.

Given NASA's growing budget pressure, the 2020 launch date may well slip again. But when SLS finally completes its maiden flight, it will usher in a new era of spaceflight and help ensure that NASA remains a leader in aerospace for a long time to come.


Alibaba senior technical expert: how do you keep a system stable under Double 11's trillions of requests?

Tair history

Tair is used widely across Alibaba. Whether you are browsing and placing orders on Taobao or Tmall, or opening Youku to watch a video, Tair is silently supporting enormous traffic behind the scenes. Its history is as follows:

  • 2010.04: Tair v1.0 officially launched in Taobao's core systems;
  • 2012.06: Tair v2.0 launched the LDB persistence product to meet persistent-storage needs;
  • 2012.10: Launched the RDB cache product, introducing a Redis interface to meet storage needs for complex data structures;
  • 2013.03: Launched the FastDump product on top of LDB import, cutting bulk-import time and access latency;
  • 2014.07: Tair v3.0 officially launched, with performance improved several times over;
  • 2016.11: The Tair intelligent operations platform went live, helping the 2016 Double 11 reach the hundreds-of-billions-of-calls scale;
  • 2017.11: Performance leap, hot-spot hashing, and resource scheduling, supporting trillions of requests.

Tair is a high-performance, distributed, scalable, and reliable key/value storage system. Its strengths lie mainly in the following areas:

  • High performance: low latency guaranteed at high throughput. Tair is one of the highest-traffic systems in the Alibaba Group, peaking at 500 million calls per second during Double 11 with an average latency under 1 millisecond;
  • High availability: automatic failover, rate limiting, auditing, data-center disaster recovery, and multi-unit, multi-region deployment keep the system working under any circumstances;
  • Large scale: data centers distributed around the world, with every Alibaba BU using it;
  • Business coverage: e-commerce, Ant Financial, Cainiao, Amap, Alibaba Health, and more.

Beyond the standard key/value operations (get, put, delete, and batch interfaces), Tair provides additional practical features that widen its range of application scenarios. (A minimal sketch of this style of interface follows the list below.) Tair's application scenarios fall into the following four categories:

  • Typical MDB scenarios: caching, to reduce pressure on the back-end database (for example, Taobao product data is cached in Tair), and temporary data storage where losing some data barely affects the service, such as login data;
  • Typical LDB scenarios: general KV storage, transaction snapshots, security and risk control; blacklist/whitelist data with high read QPS; counters that are updated very frequently and whose data must not be lost;
  • Typical RDB scenarios: caching and storing complex data structures, such as playlists and live-streaming data;
  • Typical FastDump scenarios: periodically importing offline data into a Tair cluster very quickly so fresh data can be used right away, with demanding online-read requirements: low read latency and no latency spikes.
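
To make the get/put/delete and batch interfaces mentioned above concrete, here is a minimal, self-contained sketch of that style of key/value API. The class and method names are illustrative assumptions, not Tair's actual client library, and the in-process map merely stands in for the remote cluster.

```cpp
#include <cstdint>
#include <map>
#include <optional>
#include <string>
#include <vector>

// Illustrative sketch of a key/value cache interface with get/put/delete and
// a batch read, as described above. Names are hypothetical, not Tair's real API.
class KvClient {
public:
    // Store a value with an optional expiration (0 = never expire).
    bool put(const std::string& key, const std::string& value, uint32_t expire_secs = 0) {
        store_[key] = value;
        (void)expire_secs;  // expiration handling omitted in this sketch
        return true;
    }

    // Fetch a single value; std::nullopt means a cache miss.
    std::optional<std::string> get(const std::string& key) const {
        auto it = store_.find(key);
        if (it == store_.end()) return std::nullopt;
        return it->second;
    }

    // Remove a key.
    bool remove(const std::string& key) { return store_.erase(key) > 0; }

    // Batch interface: fetch many keys in one call.
    std::map<std::string, std::string> mget(const std::vector<std::string>& keys) const {
        std::map<std::string, std::string> result;
        for (const auto& k : keys) {
            if (auto v = get(k)) result[k] = *v;
        }
        return result;
    }

private:
    std::map<std::string, std::string> store_;  // stands in for the remote cluster
};
```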

How do we meet the Double 11 challenge?

As the figures for 2012-2017 show, Double 11 GMV grew from under 20 billion yuan in 2012 to 168.2 billion yuan in 2017, the transaction-creation peak rose from 14,000 to 325,000 per second, and Tair's peak QPS grew from 13 million to nearly 500 million.

As the figure shows, Tair's traffic has grown far faster than the transaction-creation peak, which in turn has grown faster than GMV. For Tair, the challenge at midnight is how to guarantee low latency, and how to keep cost growth below business growth.

Hot-key problems are hard to solve in any distributed storage system, and because a caching system carries especially heavy traffic, its hot spots are even more pronounced. For Double 11 2017, we completely solved the cache hot-spot problem with hot-spot hashing.

At the same time, to carry 325,000 transactions per second, Alibaba's architecture has evolved into a multi-region, multi-unit deployment that uses not only units on Aliyun but also units mixed with offline services. The challenge for us is how to deploy and tear down clusters quickly and flexibly.

Multi-region, multi-unit deployment

Let's look at the overall deployment architecture and where Tair sits in it. As the diagram shows, we run a multi-region, multi-unit deployment. The system runs from the traffic access layer down to the application layer, which depends on various middleware such as message queues and configuration centers. At the bottom is the data layer: Tair and the databases. At the data layer we must provide the data synchronization the business needs so that the layers above can remain stateless.

Besides guarding against black-swan events, another important role of the multi-region, multi-unit architecture is that a new unit can be brought online quickly to take over a share of the traffic. Tair has built a complete control system to support this kind of fast, flexible site building.

Flexible site building

Tair itself is a very complex, very large-scale distributed storage system, so we built an operations and management platform. Through task scheduling, task execution, validation, and delivery, it supports fast one-click site building, from quickly bringing up mixed-deployment machines to quickly bringing up clusters. After deployment, the platform runs a series of system-, cluster-, and instance-level connectivity checks to confirm the service is complete before it is delivered for online use; the slightest omission could trigger a large-scale failure once business traffic arrives. For persistent clusters that already hold data, we must additionally wait for the migration of existing data to finish and for data synchronization to catch up before entering the verification phase.

Each Tair business cluster runs at a different utilization level, or "water level". During each full-link stress test before Double 11, the Tair resources a business consumes shift as the business model changes, so the water levels change too. We therefore have to schedule Tair resources across multiple clusters after each test: server resources are moved from clusters with low water levels to those with high ones, until all clusters converge toward the target water level.
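
As a rough illustration of the water-level idea, the sketch below greedily moves one server at a time from the coldest cluster to the hottest until every cluster is at or below a target level. The Cluster fields, the capacity constant, and the greedy policy are all assumptions for illustration; this is not Tair's actual scheduler.

```cpp
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

// A cluster's "water level" is modelled as load / total capacity, where each
// server contributes a fixed amount of capacity.
struct Cluster {
    std::string name;
    double load = 0;   // current resource demand (arbitrary units)
    int servers = 0;   // number of servers in the cluster
};

constexpr double kPerServerCapacity = 100.0;

double waterLevel(const Cluster& c) {
    return c.load / (c.servers * kPerServerCapacity);
}

// Greedy rebalance: repeatedly take one server from the coldest cluster and
// give it to the hottest one, until the hottest is at or below the target.
void rebalance(std::vector<Cluster>& clusters, double target, int max_moves) {
    if (clusters.size() < 2) return;
    for (int i = 0; i < max_moves; ++i) {
        auto byLevel = [](const Cluster& a, const Cluster& b) {
            return waterLevel(a) < waterLevel(b);
        };
        auto hi = std::max_element(clusters.begin(), clusters.end(), byLevel);
        auto lo = std::min_element(clusters.begin(), clusters.end(), byLevel);
        if (waterLevel(*hi) <= target || lo->servers <= 1) break;
        lo->servers -= 1;
        hi->servers += 1;
    }
}

int main() {
    std::vector<Cluster> clusters = {
        {"trade", 950.0, 8}, {"detail", 300.0, 8}, {"cart", 500.0, 8}};
    rebalance(clusters, /*target=*/0.7, /*max_moves=*/10);
    for (const auto& c : clusters)
        std::cout << c.name << ": " << c.servers << " servers, water level "
                  << waterLevel(c) << "\n";
    return 0;
}
```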

Data synchronization

In a multi-region, multi-unit deployment, the data layer must synchronize data across units and support a variety of read/write modes for the business. Unitized businesses read and write the local Tair in their own unit, while some non-unitized businesses get more flexible access models. Reducing synchronization delay is ongoing work: during Double 11 2017 we synchronized on the order of tens of millions of records per second across units, and how to better resolve write conflicts for non-unitized data across multiple units is a question we keep working on.
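
The article does not describe Tair's conflict-resolution protocol, but one common way to keep replicas convergent under cross-unit writes is last-writer-wins on a version number with a deterministic tie-breaker. The sketch below shows only that general idea; the Record fields and merge rule are hypothetical.

```cpp
#include <cstdint>
#include <string>
#include <unordered_map>

// General last-writer-wins merge for replicated key/value records, shown only
// to illustrate one common approach to cross-unit write conflicts.
struct Record {
    std::string value;
    uint64_t version = 0;   // e.g. an update timestamp or logical clock
    uint32_t unit_id = 0;   // tie-breaker so all units pick the same winner
};

void applyRemoteUpdate(std::unordered_map<std::string, Record>& local,
                       const std::string& key, const Record& remote) {
    auto it = local.find(key);
    if (it == local.end()) {            // key unknown locally: accept the update
        local[key] = remote;
        return;
    }
    Record& mine = it->second;
    // Higher version wins; on a tie, the higher unit id wins deterministically.
    if (remote.version > mine.version ||
        (remote.version == mine.version && remote.unit_id > mine.unit_id)) {
        mine = remote;
    }
}
```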

Performance optimization to reduce costs

Our goal is that server costs do not grow linearly with traffic; instead, the cost per unit of traffic should fall by 30% to 40% each year. We achieve this mainly through server-side performance optimization, client-side performance optimization, and tailored solutions for specific businesses.

First, let's look at how we improve performance and reduce cost on the server side. The work falls into two major pieces: one is avoiding thread-switching and scheduling overhead by reducing lock contention and going lock-free; the other is using a user-space protocol stack plus DPDK to run each request to completion end to end.

Memory data structure

When the process starts, we allocate a large chunk of memory up front and format it ourselves. The main structures are a slab allocator, a hashmap, and memory pools; once memory fills up, data is evicted along LRU chains. As the number of CPUs per server keeps growing, it is hard to improve overall performance unless lock contention is kept under control.
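
A heavily simplified sketch of the structures named above, an index hashmap plus an LRU chain evicting within a fixed memory budget, is shown below. It ignores slabs, memory pools, and concurrency entirely and is not Tair's actual engine; it only illustrates how the pieces fit together.

```cpp
#include <cstddef>
#include <list>
#include <string>
#include <unordered_map>

// Minimal in-memory cache with a fixed memory budget and LRU eviction.
class CacheEngine {
public:
    explicit CacheEngine(std::size_t budget_bytes) : budget_(budget_bytes) {}

    void put(const std::string& key, const std::string& value) {
        std::size_t need = key.size() + value.size();
        // Evict from the cold end of the LRU chain until the new entry fits.
        while (used_ + need > budget_ && !lru_.empty()) {
            const std::string& victim = lru_.back();
            auto it = index_.find(victim);
            used_ -= victim.size() + it->second.value.size();
            index_.erase(it);
            lru_.pop_back();
        }
        remove(key);                        // overwrite semantics
        lru_.push_front(key);
        index_[key] = {value, lru_.begin()};
        used_ += need;
    }

    const std::string* get(const std::string& key) {
        auto it = index_.find(key);
        if (it == index_.end()) return nullptr;
        // Promote to the hot end of the LRU chain on every hit.
        lru_.splice(lru_.begin(), lru_, it->second.lru_pos);
        return &it->second.value;
    }

    void remove(const std::string& key) {
        auto it = index_.find(key);
        if (it == index_.end()) return;
        used_ -= key.size() + it->second.value.size();
        lru_.erase(it->second.lru_pos);
        index_.erase(it);
    }

private:
    struct Entry {
        std::string value;
        std::list<std::string>::iterator lru_pos;
    };
    std::size_t budget_;
    std::size_t used_ = 0;
    std::list<std::string> lru_;                      // front = hottest
    std::unordered_map<std::string, Entry> index_;    // the hashmap
};
```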

Drawing on the literature and on the needs of Tair's own engine, we used fine-grained locks, lock-free data structures, CPU-cache-friendly data structures, and RCU to improve the engine's parallelism. The figure on the left shows per-module CPU consumption before optimization: the networking and data-lookup parts consume the most. After optimization (right), 80% of the processing time is spent on networking and data lookup, which is exactly what we expect.
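
Of the techniques listed, fine-grained locking is the easiest to show in a few lines: split the key space into shards, each protected by its own mutex, so threads working on different shards never contend. The sketch below illustrates that one technique only; the lock-free structures and RCU used in the real engine are beyond a short example.

```cpp
#include <array>
#include <cstddef>
#include <mutex>
#include <optional>
#include <string>
#include <unordered_map>

// Striped locking: each shard has its own mutex, so contention only occurs
// between threads that touch keys hashing to the same shard.
template <std::size_t NumShards = 64>
class ShardedMap {
public:
    void put(const std::string& key, const std::string& value) {
        Shard& s = shardFor(key);
        std::lock_guard<std::mutex> guard(s.mu);
        s.map[key] = value;
    }

    std::optional<std::string> get(const std::string& key) {
        Shard& s = shardFor(key);
        std::lock_guard<std::mutex> guard(s.mu);
        auto it = s.map.find(key);
        if (it == s.map.end()) return std::nullopt;
        return it->second;
    }

private:
    struct Shard {
        std::mutex mu;
        std::unordered_map<std::string, std::string> map;
    };
    Shard& shardFor(const std::string& key) {
        return shards_[std::hash<std::string>{}(key) % NumShards];
    }
    std::array<Shard, NumShards> shards_;
};
```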

User mode protocol stack

After the lock optimizations, we found that a lot of CPU was still being spent in the kernel, so we replaced the kernel protocol stack with DPDK plus Alisocket: Alisocket uses DPDK to receive packets from the NIC in user space and provides a socket API on top of its own protocol stack, which we then integrated into Tair. Comparing Tair against memcached and the industry-leading seastar framework, we measured a performance gain of more than 10% over seastar.
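
Alisocket itself is internal to Alibaba, but the run-to-completion shape it relies on can be sketched with standard DPDK calls: each core polls its own NIC queue, handles every packet fully in user space, and never hands work to another thread. EAL, port, and mempool initialization as well as the protocol and engine logic are omitted; this shows only the shape of the loop, not Alisocket.

```cpp
#include <cstdint>

#include <rte_ethdev.h>
#include <rte_mbuf.h>

// Minimal DPDK run-to-completion receive loop for one worker core.
static void rx_loop(uint16_t port_id, uint16_t queue_id) {
    constexpr uint16_t kBurst = 32;
    struct rte_mbuf* bufs[kBurst];

    for (;;) {
        // Poll a burst of packets straight from the NIC queue in user space.
        const uint16_t nb_rx = rte_eth_rx_burst(port_id, queue_id, bufs, kBurst);
        for (uint16_t i = 0; i < nb_rx; ++i) {
            // Run the request to completion on this core: parse it, look it up
            // in the engine, and send the reply (all omitted in this sketch).
            rte_pktmbuf_free(bufs[i]);
        }
    }
}
```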

Memory consolidation

As performance improves, the memory used per unit of QPS shrinks, so memory becomes the scarce resource. There is another reality: Tair is a multi-tenant system, and every business behaves differently, which often leaves many pages assigned to a slab class but only partially full, while a small number of slab classes are completely full. The cluster then appears to have capacity but cannot actually allocate new data.

So we implemented page consolidation: partially used pages within the same slab class are merged, freeing a large amount of memory. As the figure shows, within a slab class we track the usage of each page and attach pages to buckets of different fill levels; during consolidation, entries on low-usage pages are migrated onto high-usage pages. All the associated data structures, including the LRU chains, must be updated as well, so it amounts to a reorganization of the whole memory layout. The feature is especially effective in shared public clusters, and depending on the scenario it can significantly improve memory utilization.
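
The sketch below shows the core of the consolidation step in isolation: within one slab class, drain the least-used pages into the most-used ones so empty pages can be returned. Relocating the actual items and re-linking the LRU chains and hashmap entries, which the text notes is the hard part, is only marked by a comment; the Page struct and the policy are illustrative assumptions, not Tair's implementation.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct Page {
    int used = 0;       // occupied item slots
    int capacity = 0;   // total item slots on this page
};

// Returns the number of pages that become completely empty and can be freed.
std::size_t consolidate(std::vector<Page>& pages) {
    if (pages.size() < 2) return 0;
    // Sort ascending by usage: donors (emptier pages) first, receivers last.
    std::sort(pages.begin(), pages.end(),
              [](const Page& a, const Page& b) { return a.used < b.used; });
    std::size_t donor = 0, receiver = pages.size() - 1;
    while (donor < receiver) {
        Page& d = pages[donor];
        Page& r = pages[receiver];
        int movable = std::min(d.used, r.capacity - r.used);
        d.used -= movable;   // in reality: relocate items and fix LRU/hash links
        r.used += movable;
        if (d.used == 0) ++donor;               // donor fully drained
        if (r.used == r.capacity) --receiver;   // receiver full
    }
    return static_cast<std::size_t>(
        std::count_if(pages.begin(), pages.end(),
                      [](const Page& p) { return p.used == 0; }));
}
```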

Client optimization

Those were the server-side changes; now let's look at client performance. Our client library runs on the business's own application servers and therefore consumes their resources, so the lower we can drive its resource consumption, the better for overall system cost. We made two client optimizations: replacing the network framework and adapting it to coroutines, moving from Mina to Netty, which raised throughput by 40%; and optimizing serialization by integrating Kryo and Hessian, which raised throughput by a further 16% or more.

Memory grid

How can we reduce overall cost for both Tair and the business by working together with the business? Tair offers multi-level storage integration. In security and risk-control scenarios, for example, the read and write volume is huge and there is a great deal of local computation, so we keep the data a business machine needs in a local store on that machine: most reads hit locally, and writes are merged over a time window before the merged result is written to the remote Tair cluster as the final store. We provide read/write-through with write merging, on top of Tair's own multi-replica capability. On Double 11 this reduced reads against Tair to 27.68% and writes to 55.75% of what they would otherwise have been.
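
A minimal sketch of the memory-grid pattern described above: reads go through a local store, and writes are merged locally per key and flushed to the remote cluster after a time window. The callback types and flush policy are assumptions for illustration; this is not Tair's actual client.

```cpp
#include <chrono>
#include <functional>
#include <string>
#include <unordered_map>

class MemoryGrid {
public:
    using RemoteGet = std::function<std::string(const std::string&)>;
    using RemotePut = std::function<void(const std::string&, const std::string&)>;

    MemoryGrid(RemoteGet rget, RemotePut rput, std::chrono::milliseconds window)
        : remote_get_(std::move(rget)), remote_put_(std::move(rput)),
          window_(window), last_flush_(std::chrono::steady_clock::now()) {}

    // Read-through: hit the local store first, fall back to the remote cluster.
    std::string get(const std::string& key) {
        auto it = local_.find(key);
        if (it != local_.end()) return it->second;
        std::string v = remote_get_(key);
        local_[key] = v;
        return v;
    }

    // Writes land locally; later writes to the same key overwrite earlier ones,
    // so only the merged final value reaches the remote cluster.
    void put(const std::string& key, const std::string& value) {
        local_[key] = value;
        dirty_[key] = value;
        maybeFlush();
    }

private:
    void maybeFlush() {
        auto now = std::chrono::steady_clock::now();
        if (now - last_flush_ < window_) return;
        for (const auto& [k, v] : dirty_) remote_put_(k, v);  // one merged write per key
        dirty_.clear();
        last_flush_ = now;
    }

    RemoteGet remote_get_;
    RemotePut remote_put_;
    std::chrono::milliseconds window_;
    std::chrono::steady_clock::time_point last_flush_;
    std::unordered_map<std::string, std::string> local_;   // local read cache
    std::unordered_map<std::string, std::string> dirty_;   // pending merged writes
};
```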

Solving the hot-spot problem

Cache breakdown

Caches have evolved from single nodes into distributed systems organized by data sharding, but each individual shard is still a single point. During a big promotion or a breaking-news event, the hot data often lands on one shard, turning it into a single point of access; a single cache node cannot withstand that much pressure, and a large number of requests go unanswered. A cache can protect itself by rate limiting, but rate limiting does not save the system as a whole: the rejected traffic falls through to the database, which, as just noted, cannot absorb it either, and the whole system misbehaves.

So the only real solution is for the caching system itself to be the place where the traffic terminates. Whether it is a big promotion, hot news, or an anomaly in the business itself, the cache must absorb the traffic and surface the hot-spot situation to the business.

Hot-spot hashing

After exploring several approaches, we settled on hot-spot hashing. We evaluated a client-side local cache and a secondary cache tier; both mitigate hot spots to some degree, but each has drawbacks: the number of servers needed for a secondary cache is hard to estimate, and a client-side local cache affects the application servers' memory and performance. Hot-spot hashing instead adds a "hotzone" area directly on the data nodes, and the hotzone stores the hot data. The key steps of the scheme are the following:

Intelligent recognition. Hot data is always changing; some keys are frequency hot spots, others are traffic hot spots. Internally we use a multi-level LRU structure: keys are placed on different LRU levels according to their weight, and when the LRUs fill up, eviction starts from the lowest-level chains so that high-weight keys are retained.

Real-time feedback and dynamic hashing. When a hot key is accessed, the app server and the Tair server coordinate: according to a preconfigured access model, requests for that key are dynamically hashed to the hotzones on other data nodes. Every node in the cluster takes on this role.
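
To show the routing half of the scheme in miniature: a key normally maps to its home node, but once it is flagged hot its requests are spread across the hotzones of all nodes. The threshold counter below stands in for the multi-level-LRU recognition described above; none of it is Tair's actual logic.

```cpp
#include <cstdint>
#include <functional>
#include <string>
#include <unordered_map>

class HotspotRouter {
public:
    HotspotRouter(uint32_t num_nodes, uint64_t hot_threshold)
        : num_nodes_(num_nodes), hot_threshold_(hot_threshold) {}

    // Returns the node that should serve this read.
    uint32_t routeRead(const std::string& key) {
        uint64_t count = ++access_count_[key];
        uint64_t h = std::hash<std::string>{}(key);
        if (count < hot_threshold_) {
            return static_cast<uint32_t>(h % num_nodes_);   // normal path: home shard
        }
        // Hot path: fan the key out across every node's hotzone so no single
        // node carries the whole hot key; mixing in the access count picks a
        // different hotzone per request.
        return static_cast<uint32_t>((h ^ count) % num_nodes_);
    }

private:
    uint32_t num_nodes_;
    uint64_t hot_threshold_;
    std::unordered_map<std::string, uint64_t> access_count_;  // per-key hit counter
};
```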

In this way, traffic that would have hit a single point is spread across a set of machines in the cluster.

The whole project is very complex, and hot-spot hashing delivered a very significant result during Double 11, absorbing more than 8 million hot-key requests per second at peak. In the diagram on the right, the red line is the water level the clusters would have reached without hot-spot hashing, and the green line is the water level with it enabled. Without it, many clusters would have exceeded the "death" level, which is 130% of cluster capacity; with it, hashing the hot spots across the whole cluster brings the water level back below the safety line. In other words, without hot-spot hashing, many clusters would likely have run into trouble.

Write hot spots

Write hot spots are handled similarly to read hot spots, mainly by merging write operations. The first step is still to identify the hot key; if a write targets a hot key, the request is dispatched to a dedicated hot-write merging thread, which merges write requests over a time window, and a timer thread then submits the merged request to the engine layer at the configured merge interval. This significantly reduces the pressure on the engine layer.
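
A simplified sketch of the hot-write merging described above: hot writes land in a per-key merge map (last write wins), and a timer thread flushes the merged batch to the engine at a fixed interval. The Engine callback and the merge policy are simplifying assumptions, not Tair's implementation.

```cpp
#include <chrono>
#include <condition_variable>
#include <functional>
#include <mutex>
#include <string>
#include <thread>
#include <unordered_map>

class HotWriteMerger {
public:
    using Engine = std::function<void(const std::string&, const std::string&)>;

    HotWriteMerger(Engine engine, std::chrono::milliseconds interval)
        : engine_(std::move(engine)), interval_(interval),
          flusher_([this] { flushLoop(); }) {}

    ~HotWriteMerger() {
        {
            std::lock_guard<std::mutex> g(mu_);
            stopping_ = true;
        }
        cv_.notify_all();
        flusher_.join();
    }

    // Called only for writes that hot-spot detection has flagged as hot.
    void submitHotWrite(const std::string& key, const std::string& value) {
        std::lock_guard<std::mutex> g(mu_);
        pending_[key] = value;   // merge: keep only the latest value per key
    }

private:
    void flushLoop() {
        std::unique_lock<std::mutex> lk(mu_);
        while (!stopping_) {
            cv_.wait_for(lk, interval_);
            auto batch = std::move(pending_);
            pending_.clear();
            lk.unlock();
            for (const auto& [k, v] : batch) engine_(k, v);  // one engine write per hot key
            lk.lock();
        }
    }

    Engine engine_;
    std::chrono::milliseconds interval_;
    std::mutex mu_;
    std::condition_variable cv_;
    bool stopping_ = false;
    std::unordered_map<std::string, std::string> pending_;
    std::thread flusher_;
};
```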

Having put the read- and write-hot-spot handling through the test of a full Double 11, we can safely say that read and write hot spots in Tair, across both the cache and the KV storage products, are completely resolved.

 


Google said it is studying various forms of AR but the technology will take years to mature

Google showcased ARCore at the 2018 Mobile World Congress and highlighted that it supports 100 million Android devices. However, Rick Osterloh, Google's head of hardware, said in a media interview that while the company is studying various forms of augmented reality beyond mobile phones, no new products are imminent, because the technology still needs a few years to mature.

In an interview with The Telegraph, Osterloh said Google is researching areas beyond phone-based augmented reality that it finds very interesting.

He confirmed that Google is “doing a lot of research” and “constantly studying various forms of augmented reality applications.”

However, consumers should not expect new products in the short term. According to Osterloh, the technology will take years to mature. Until then, Google will keep building the infrastructure around it; this week's release of ARCore 1.0 is the best example, as developers can finally publish augmented-reality apps on the Play Store.

Osterloh said: "These technologies take some time to mature, probably a few years, and we will invest in this area for the long term. The technology is still some distance away from what people expect of it."

When Osterloh joined Google in 2016, he was put in charge of the company's hardware business, including the Google Glass project, which had been renamed Project Aura. In fact, Ivy Ross, the head of Project Aura, was later named head of Google's hardware design.

Google did still launch the Google Glass Enterprise Edition last year. Earlier, in 2015, there were also rumors that the team was developing two audio devices using bone-conduction technology.

In addition to Android and ARCore, Google has also invested in Magic Leap, which recently unveiled its first hardware product.


Coinbase took in $1 billion in revenue last year, 43% of it in December

Friends of 36Kr • 2018-02-28 • Blockchain
Coinbase said in January that it had expected to bring in more than $600 million for the year, but its 2017 revenue actually exceeded $1 billion, helped by a wave of gains around Thanksgiving and Christmas.

According to foreign media reports, Coinbase, the digital-currency trading and wallet platform founded in 2012 and backed by Silicon Valley investors, says it took in $1 billion in revenue in 2017. An independent analysis by Superfly Insights adds that December alone, when the bitcoin price soared, accounted for 43% of the full year's revenue, and that revenue began to plummet soon afterwards.

“Their magic has not continued,” said Jonathan Meiri, chief executive of Superfly Insights, after analyzing Coinbase’s data. “Although there was a sharp rise in December, signs of collapse began in January and February.”

Superfly Insights' analysis is based on income and aggregate data drawn from anonymized emails, covering 25,000 users across all of 2017 and the first six weeks of 2018, collected through productivity, personal-finance, and expense-management applications. Superfly Insights typically provides this kind of data and analysis to clients including hedge funds, banks, and venture capital firms; Meiri said it has also supplied data and analysis to KPMG. "Of the top five taxi applications in the world, three use our data," he said, adding that data collection is limited by the terms of each application or service and that the tracking process complies with Europe's strict privacy laws as well as standard data-protection regulations.

Coinbase had told investors it expected to bring in more than $600 million for the year; instead, 2017 revenue topped $1 billion, helped by a wave of gains around Thanksgiving and Christmas. Given Bitcoin's frenzied price rise in 2017, that performance makes sense.

Nicolas Christin, a Carnegie Mellon University professor who has traced digital currency through the notorious Silk Road marketplace, said Superfly Insights' data is relatively reliable. However, Christin believes rigorous verification is extremely difficult, since not all of the trades on a platform like Coinbase pass through the bitcoin blockchain itself. Coinbase declined to comment on the figures.

It is not surprising that Coinbase's revenue climbed along with the cryptocurrency market, and especially Bitcoin; the New York Times has called Coinbase "the heart of the bitcoin frenzy." Coinbase charges a brokerage fee on each transaction that varies with the user's location and currency. Last December, the bitcoin price soared from $11,000 to $19,000 and then dropped sharply to $13,000. In that month alone, Coinbase's servers went down repeatedly under the crush of traffic; according to the Times, peak traffic to Coinbase's servers in December was double the previous record and eight times the level of last June.

Higher prices and larger volumes mean Coinbase collects more in transaction fees, and that record is unlikely to be broken unless the price of Bitcoin soars again in 2018. "Unless the rally resumes, it will be very difficult to exceed 2017's revenue," Meiri said. "It's not easy to reach the same revenue level." Of course, many cryptocurrency experts predict a new wave of market gains in 2018, and some of them have deep ties to Coinbase. "Bitcoin will soar to $50,000 by December," Thomas Glucksmann, head of marketing at cryptocurrency exchange Gatecoin, told CNBC.

Coinbase is generally considered the darling of the cryptocurrency wave: it was the first to sniff out the business opportunity. For a long time, ordinary people had little access to cryptocurrencies short of figuring out how to mine coins themselves or finding someone willing to buy or sell them. Coinbase was the first service platform to support buying and storing bitcoin and other digital currencies via bank transfer and credit card. Brian Armstrong, Coinbase's chief executive, gives media interviews and takes meetings, in stark contrast to the mysterious, anonymous traders of the bitcoin 1.0 era, and Coinbase's Silicon Valley backing lends it legitimacy. By the end of 2017, Coinbase's overall performance was excellent, despite frequent customer-service problems and a legal fight with the IRS. The New York Times has noted that Coinbase already occupies a two-story office in San Francisco.

However, Coinbase's competitors are closing in, and investors now have many other ways to buy cryptocurrencies. In 2018, both Square and the stock-trading app Robinhood added support for cryptocurrencies, and both said they would not charge fees on crypto trades, putting heavy pressure on Coinbase.

Superfly Insights also found that the composition of Coinbase's business changed dramatically within a year. In early 2017, bitcoin accounted for 90% of Coinbase's total transaction volume, with an average purchase of $483. A year later, bitcoin makes up less than half of trading at Coinbase, which now also supports other cryptocurrencies such as Ethereum and Litecoin. Superfly Insights likewise found that Coinbase users rarely sell their bitcoin, and when they do, the transaction is typically about three times the average purchase, at $1,393.

Meiri noted in the report: "I'm curious how Coinbase builds a recurring revenue model, especially given how little the average user generates. People buy on Coinbase and accumulate a certain amount, and that's where the problem lies: I look at the price of bitcoin every single day, and what strikes me is how little there is to actually do with it."

According to Superfly Insights' analysis, the swings in Coinbase's income reflect the instability and imbalance of a cryptocurrency world that is still in its infancy. Coinbase may be able to grow revenue through other lines of business, and perhaps even by convincing merchants to accept digital-currency payments; if more businesses accept cryptocurrencies, the prices of those currencies should rise. Much depends on whether Coinbase can persuade more companies to join the world of digital money. Meiri said: "Coinbase needs the merchant world to make it part of everyday commerce, but the fact is that most merchants are not interested in the relatively small market for bitcoin and other cryptocurrencies."

Coinbase has raised more than $225 million from investors and was valued at $1.6 billion after its most recent financing round, driven in part by the explosive growth and volatility of bitcoin prices.


Miners bought 3 million GPUs last year, and AMD was the biggest winner

Digital-currency mining continues to prop up chip makers, with miners contributing $776 million in chip sales in 2017. Major GPU supplier AMD's market share rose significantly last quarter, and the market research firm estimates that chip prices will not slide for some time to come.

Mining continues to support chip makers' sales, and AMD has used the boom to grow its market share.

Market research firm Jon Peddie Research released its latest chip-industry report, covering graphics processor (GPU) shipments and vendor market share for the fourth quarter of last year. AMD's market share rose from 14% in the third quarter to 14.2%, while both Intel and NVIDIA declined.

Jon Peddie also noted that, unlike AMD and NVIDIA, Intel's market share is less affected by digital currencies: Intel sells integrated GPUs (iGPUs) bundled with desktop and laptop processors, so the ebb and flow of the digital-currency tide has little effect on its GPU sales.

In terms of shipments, GPU chips fell 4.8% year on year last year, with desktop chips down 2% and notebook chips down 7%. Digital-currency players nevertheless gave the chip industry significant support: miners bought more than 3 million discrete graphics processors last year, worth $776 million in sales.

In the fourth quarter, AMD's shipments rose 8.08% quarter on quarter, while Intel's fell 1.98% and NVIDIA's dropped as much as 6%.

Jon Peddie said AMD has been the biggest beneficiary of the mining tide: AMD chips are the mainstay of digital-currency mining rigs, and as digital currencies soared in the fourth quarter of last year, demand for its chips got a significant boost.

At the end of last month, AMD reported fourth-quarter earnings, with adjusted EPS of $0.08, beating market expectations of $0.05. Before the release, analysts had pointed out that the late-2017 surge in digital currencies such as bitcoin would not only lift AMD's fourth-quarter results but also support strong guidance for the first quarter of this year.

"The current situation is that miners are swallowing up the entire graphics-card supply, and production cannot keep up with the huge demand." Jon Peddie noted that miners contributed more to GPU sales growth than gamers did, even though gaming remains the most important driver of GPU sales overall.

In the second quarter of last year, digital-currency mining also pushed up GPU sales. In the third quarter, however, digital currencies faced tightening regulation around the world, PC gaming enjoyed strong growth, and mining's impact on GPU sales weakened.

Still, Jon Peddie, president of Jon Peddie Research, said chip demand from miners may slow in the future, since gamers who already own GPUs can mine part-time when they are not playing, offsetting some of the demand, but "prices will not come down for some time to come."