Smallest transistor intel
11/20/2023

The miniaturization trend has led to silicon chips today that have almost unimaginably small circuitry. Transistors, the tiny switches that implement computer microprocessors, are so small that 1,000 of them laid end-to-end are no wider than a human hair. For a long time, the smaller the transistors were, the faster they could switch. But today, we're approaching the limit of how small transistors can get.

As a result, over the last decade researchers have been scratching their heads to find other ways to improve performance so that the computer industry can continue to innovate.

While we wait for the maturation of new computing technologies like quantum, carbon nanotubes, or photonics (which may take a while), other approaches will be needed to get performance as Moore's Law comes to an end.

In a recent journal article published in Science, a CSAIL team identifies three key areas to prioritize to continue to deliver computing speed-ups: better software, new algorithms, and more streamlined hardware.

CSAIL's Charles Leiserson says that the performance benefits from miniaturization have been so great that, for decades, programmers have been able to prioritize making code easy to write rather than making it run fast. The inefficiency that this tendency introduces has been acceptable, because faster computer chips have always been able to pick up the slack.

"But nowadays, being able to make further advances in fields like machine learning, robotics and virtual reality will require huge amounts of computational power that miniaturization can no longer provide," says Leiserson, the Edwin Sibley Webster Professor in MIT's Department of Electrical Engineering and Computer Science (EECS). "If we want to harness the full potential of these technologies, we must change our approach to computing."

Leiserson co-wrote the paper with research scientist Neil Thompson, professor Daniel Sanchez, adjunct professor Butler Lampson, and research scientists Joel Emer, Bradley Kuszmaul, and Tao Schardl. It will be published in the next issue of Science, out this week.

[Figure: SPECint (largely serial) performance, SPECint-rate (parallel) performance, and clock-frequency scaling for microprocessors from 1985 to 2015, normalized to the Intel 80386 DX microprocessor in 1985. Credit: Massachusetts Institute of Technology]

On the software side, much existing code has been designed under the ancient assumption that processors can do only one operation at a time. But in recent years, multicore technology has enabled complex tasks to be completed thousands of times faster and in a much more energy-efficient way. Instead of leaning on that serial assumption, the researchers recommend techniques like parallelizing code.

"These are the kinds of strategies that programmers have to rethink as hardware improvements slow down," says Thompson. "We can't keep doing 'business as usual' if we want to continue to get the speed-ups we've grown accustomed to."
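To make "parallelizing code" concrete, here is a minimal sketch in Go (our choice of language, workload, and names, not the paper's): the same CPU-bound loop written serially, then split across cores with goroutines.

```go
// A minimal sketch of parallelizing a CPU-bound loop across cores.
// The workload, sizes, and names are illustrative, not from the paper.
package main

import (
	"fmt"
	"runtime"
	"sync"
	"time"
)

// work stands in for any independent, per-element computation.
func work(x float64) float64 {
	for i := 0; i < 100; i++ {
		x = x*1.0000001 + 1.0
	}
	return x
}

// sumSerial processes every element on a single core.
func sumSerial(xs []float64) float64 {
	total := 0.0
	for _, x := range xs {
		total += work(x)
	}
	return total
}

// sumParallel splits the slice into one chunk per core; each worker
// writes only its own slot of partial, so no locks are needed.
func sumParallel(xs []float64) float64 {
	nWorkers := runtime.NumCPU()
	partial := make([]float64, nWorkers)
	chunk := (len(xs) + nWorkers - 1) / nWorkers
	var wg sync.WaitGroup
	for w := 0; w < nWorkers; w++ {
		lo, hi := w*chunk, (w+1)*chunk
		if hi > len(xs) {
			hi = len(xs)
		}
		if lo >= hi {
			break
		}
		wg.Add(1)
		go func(w, lo, hi int) {
			defer wg.Done()
			s := 0.0
			for _, x := range xs[lo:hi] {
				s += work(x)
			}
			partial[w] = s
		}(w, lo, hi)
	}
	wg.Wait()

	// Combine per-worker sums; floating-point reassociation may change
	// the last few bits relative to the serial version.
	total := 0.0
	for _, s := range partial {
		total += s
	}
	return total
}

func main() {
	xs := make([]float64, 2_000_000)
	for i := range xs {
		xs[i] = float64(i)
	}
	t0 := time.Now()
	fmt.Printf("serial:   %.0f in %v\n", sumSerial(xs), time.Since(t0))
	t1 := time.Now()
	fmt.Printf("parallel: %.0f in %v on %d cores\n",
		sumParallel(xs), time.Since(t1), runtime.NumCPU())
}
```

On a typical multicore machine the parallel version finishes several times sooner; the exact speed-up depends on core count, memory bandwidth, and how evenly the work divides.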
For algorithms, the team suggests a three-pronged approach that includes exploring new problem areas, addressing concerns about how algorithms scale, and tailoring algorithms to better take advantage of modern hardware (a toy illustration of that last point follows below).

In terms of hardware architecture, the team advocates that hardware be streamlined so that problems can be solved with fewer transistors and less silicon. Streamlining includes using simpler processors and creating hardware tailored to specific applications, the way the graphics-processing unit (GPU) is tailored for computer graphics.
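One concrete, if toy, example of what tailoring an algorithm to modern hardware can mean: the two loops below perform identical arithmetic, but the one whose memory-access order matches the cache layout typically runs several times faster. The example and its sizes are ours, not the paper's.

```go
// A toy illustration of tailoring an algorithm to the memory hierarchy.
// The matrix size is illustrative; absolute timings vary by machine.
package main

import (
	"fmt"
	"time"
)

const n = 2048

func main() {
	// One flat slice holding an n x n matrix in row-major order,
	// i.e., element (i, j) lives at index i*n + j.
	m := make([]float64, n*n)
	for i := range m {
		m[i] = float64(i % 7)
	}

	// Row-major traversal: consecutive iterations touch adjacent memory,
	// so every cache line fetched is fully used.
	t0 := time.Now()
	sum := 0.0
	for i := 0; i < n; i++ {
		for j := 0; j < n; j++ {
			sum += m[i*n+j]
		}
	}
	fmt.Printf("row-major:    sum=%.0f in %v\n", sum, time.Since(t0))

	// Column-major traversal: identical arithmetic, but successive
	// accesses jump n*8 bytes apart, wasting most of each cache line.
	t1 := time.Now()
	sum = 0.0
	for j := 0; j < n; j++ {
		for i := 0; i < n; i++ {
			sum += m[i*n+j]
		}
	}
	fmt.Printf("column-major: sum=%.0f in %v\n", sum, time.Since(t1))
}
```

The same idea scales up: cache blocking, vectorization, and GPU offload all restructure an algorithm around what the hardware does cheaply.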
"Since Moore's Law will not be handing us improved performance on a silver platter, we will have to deliver performance the hard way," says Moshe Vardi, a professor in computational engineering at Rice University who was not part of the project. "This is a great opportunity for computing research, and the report provides a road map for such research."