Wednesday, October 24, 2012

Computers in Fundamental Research (note)

In mathematics, computers have been used in the proofs of the Four Color Theorem, Kepler's Conjecture, and others. Research on automating mathematical proofs through formal logic has also come a long way. But that is just mathematics. These days, computers are making important contributions to fundamental discoveries in the physical sciences as well. Take, for example, the Large Hadron Collider.

To date, the mathematical model invoked to explain all known physical observations is called the "Standard Model". This model is, in essence, a set of equations.
Avi Wigderson writes:

Let’s go back to Q1, and to what is perhaps the largest (the budget is several billion dollars) experiment, designed to further our understanding of time, space, energy, mass and more generally the physical laws of nature. We refer to the LHC (Large Hadron Collider) at CERN, Geneva, which should be operational in about one year. The outcomes of its experiments are awaited eagerly by numerous physicists. If these confirm (or refute) theories such as supersymmetry, or the existence of the elusive Higgs particle (responsible for mass in the current prevailing theory, the standard model), this excitement will be shared with the public at large.
But how would they know? The LHC bombards protons against each other at enormous energies, and detectors of all kinds attempt to record the debris of these collisions. There are billions of collisions per second (and the LHC will operate for several years), so the total data detected is much, much larger than what can be kept. Moreover, only a few of these collisions (fewer than 1 in a million) provide new information relevant to the searches above (most information was already picked up by previous, less sensitive colliders, and helped build up the current theories). So ultrafast online computer decisions have to be made about which few information bits to keep! The people who wrote these programs have designed and implemented an efficient recognition device for new knowledge! Needless to say, the programs that search and analyze the kept data (a 20-mile-high stack of CDs' worth of it) would also have to be designed to efficiently find something new. Ultimately, we would like programs that would analyze the data and suggest new models and theories explaining it directly.
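The idea of an online trigger described above can be sketched in a few lines of code. This is only a toy illustration, not the actual LHC trigger software: the event fields, thresholds, and the two-stage structure (a cheap fast cut followed by a slower software stage) are invented here to convey the concept of discarding almost everything in real time.

```python
import random

# Toy model of a collision event: a dict of detector readouts.
# Field names and distributions are invented for illustration.
def make_event(rng):
    return {
        "total_energy_gev": rng.expovariate(1 / 50.0),  # most events are soft
        "n_muons": rng.choice([0, 0, 0, 0, 1, 2]),
    }

def level1_trigger(event):
    # Fast, cheap cut: discard the overwhelming majority of events
    # with a single threshold, as a hardware trigger stage would.
    return event["total_energy_gev"] > 300.0

def high_level_trigger(event):
    # Slower software stage, applied only to level-1 survivors.
    return event["n_muons"] >= 1

def run_trigger(n_events, seed=0):
    rng = random.Random(seed)
    kept = []
    for _ in range(n_events):
        ev = make_event(rng)
        if level1_trigger(ev) and high_level_trigger(ev):
            kept.append(ev)
    return kept

if __name__ == "__main__":
    kept = run_trigger(1_000_000)
    print(f"kept {len(kept)} of 1000000 events")
```

With these made-up thresholds only a tiny fraction of events survive both stages, which is the point: the decision of what to keep must be computed on the fly, because storing everything is impossible.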
But clearly, such huge investment of resources would never take place if we were not convinced that new phenomena, if observed, could be efficiently recognized. 
