Single program, multiple data
From Wikipedia, the free encyclopedia

In computing, single program, multiple data (SPMD) is a term that has been used to refer to computational models for exploiting parallelism, in which multiple processors cooperate in executing a program in order to obtain results faster.

The term SPMD was introduced in 1983 and was used to denote two different computational models:

  1. by Michel Auguin (University of Nice Sophia-Antipolis) and François Larbey (Thomson/Sintra),[1][2][3] as a "fork-and-join" and data-parallel approach where the parallel tasks ("single program") are split up and run simultaneously in lockstep on multiple SIMD processors with different inputs, and
  2. by Frederica Darema (IBM),[4][5][6] where "all (processors) processes begin executing the same program... but through synchronization directives ... self-schedule themselves to execute different instructions and act on different data", enabling MIMD parallelization of a given program; this approach is more general than data-parallel and more efficient than fork-and-join for parallel execution on general-purpose multiprocessors.

The (IBM) SPMD is the most common style of parallel programming and can be considered a subcategory of MIMD in that it refers to MIMD execution of a given ("single") program.[7] It is also a prerequisite for research concepts such as active messages and distributed shared memory.

SPMD vs SIMD

An example of "Single program, multiple data"

In SPMD parallel execution, multiple autonomous processors simultaneously execute the same program at independent points, rather than in the lockstep that SIMD or SIMT imposes on different data. With SPMD, tasks can be executed on general-purpose CPUs. In SIMD, by contrast, the same operation (instruction) is applied simultaneously to multiple data items to manipulate data streams, as in vector processing, where the data are organized as vectors. Another class of processors, GPUs, encompasses multiple SIMD stream processors. SPMD and SIMD are not mutually exclusive; SPMD parallel execution can include SIMD, vector, or GPU sub-processing. SPMD has been used for parallel programming of both message-passing and shared-memory machine architectures.
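This difference can be made concrete with a short sketch in which every worker executes the same program but selects its own slice of the data and may branch on its identity. The example below is a minimal illustration only, assuming POSIX threads as the execution vehicle and invented names such as worker and NWORKERS; SPMD processes on separate nodes would follow the same pattern.

```c
/* Minimal SPMD sketch: every thread runs the same function ("single
 * program") but operates on its own slice of the data ("multiple data")
 * and may take a different path based on its id. POSIX threads are used
 * here only as one possible vehicle; MPI processes behave analogously. */
#include <pthread.h>
#include <stdio.h>

#define NWORKERS 4
#define N 1000

static double data[N];

struct arg { int id; };

static void *worker(void *p)
{
    int id = ((struct arg *)p)->id;
    int chunk = N / NWORKERS;
    int lo = id * chunk;
    int hi = (id == NWORKERS - 1) ? N : lo + chunk;

    /* Same code everywhere, applied to different data. */
    for (int i = lo; i < hi; i++)
        data[i] *= 2.0;

    /* Unlike SIMD lockstep, workers may branch independently. */
    if (id == 0)
        printf("worker 0 performs extra bookkeeping\n");
    return NULL;
}

int main(void)
{
    pthread_t t[NWORKERS];
    struct arg a[NWORKERS];

    for (int i = 0; i < NWORKERS; i++) {
        a[i].id = i;
        pthread_create(&t[i], NULL, worker, &a[i]);
    }
    for (int i = 0; i < NWORKERS; i++)
        pthread_join(t[i], NULL);
    return 0;
}
```

The branch on id is exactly what SIMD lockstep cannot express directly: each SPMD worker is free to follow its own path through the common program.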

Distributed memory


On distributed memory computer architectures, SPMD implementations usually employ message passing programming. A distributed memory computer consists of a collection of interconnected, independent computers, called nodes. For parallel execution, each node starts its own program and communicates with other nodes by sending and receiving messages, calling send/receive routines for that purpose. Other parallelization directives, such as barrier synchronization, may also be implemented by messages. The messages can be sent by a number of communication mechanisms, such as TCP/IP over Ethernet, or specialized high-speed interconnects such as InfiniBand or Omni-Path. Serial sections of the program can be implemented by identical computation of the serial section on all nodes, rather than computing the result on one node and sending it to the others, when that reduces communication overhead and thereby improves performance.

Today, standard interfaces such as PVM and MPI isolate the programmer from the details of the message passing.
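As a concrete sketch of message-passing SPMD, the following minimal MPI program starts the same executable on every node; rank 0 sends a value and rank 1 receives it. The tag and payload are arbitrary illustrative choices, and at least two ranks are assumed (e.g. mpirun -np 2 ./a.out).

```c
/* Minimal message-passing SPMD sketch with MPI: all ranks execute this
 * same program and branch on their rank. Assumes at least two ranks. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, value;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;                                   /* arbitrary payload */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", value);
    }

    /* Barrier synchronization, itself implemented by messages. */
    MPI_Barrier(MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}
```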

Distributed-memory programming is the style used on parallel supercomputers, from homegrown Beowulf clusters to the largest clusters on the TeraGrid, as well as on present-day GPU-based supercomputers.

Shared memory


On a shared memory machine (a computer with several interconnected CPUs that access the same memory space), the sharing can be implemented in the context of either physically shared memory or logically shared (but physically distributed) memory; in addition to the shared memory, the CPUs in the computer system can also have local (or private) memory. In either context, synchronization can be enabled with hardware-enabled primitives (such as compare-and-swap or fetch-and-add). For machines that lack such hardware support, locks can be used, and data can be "exchanged" across processors (or, more generally, processes or threads) by depositing the sharable data in a shared memory area. When the hardware does not support shared memory, packing the data as a "message" is often the most efficient way to program (logically) shared memory computers with a large number of processors, where the physical memory is local to each processor and accessing the memory of another processor takes longer. SPMD on a shared memory machine can be implemented by standard (heavyweight) processes or by (lightweight) threads.
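As an illustration of such hardware-enabled primitives, the sketch below uses a shared fetch-and-add counter (via C11 atomics) so that threads self-schedule work items; the counter, item count, and thread count are invented for the example and not tied to any particular system.

```c
/* Sketch of lock-free self-scheduling on a shared memory machine: each
 * thread atomically claims the next work item with fetch-and-add. */
#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4
#define NITEMS 16

static atomic_int next_item;            /* shared scheduling counter */

static void *worker(void *unused)
{
    (void)unused;
    for (;;) {
        /* Hardware fetch-and-add: returns the old value, then increments. */
        int i = atomic_fetch_add(&next_item, 1);
        if (i >= NITEMS)
            break;
        printf("processing item %d\n", i);   /* stand-in for real work */
    }
    return NULL;
}

int main(void)
{
    pthread_t t[NTHREADS];

    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(t[i], NULL);
    return 0;
}
```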

Shared memory multiprocessing (both symmetric multiprocessing, SMP, and non-uniform memory access, NUMA) presents the programmer with a common memory space and the possibility to parallelize execution. With the (IBM) SPMD model, the cooperating processors (or processes) take different paths through the program, using parallel directives (parallelization and synchronization directives, which can utilize compare-and-swap and fetch-and-add operations on shared memory synchronization variables), and perform operations on data in the shared memory ("shared data"); the processors (or processes) can also access and operate on data in their local memory ("private data"). In contrast, with fork-and-join approaches the program starts executing on one processor and execution splits into a parallel region, which begins when parallel directives are encountered; in a parallel region, the processors execute a parallel task on different data. A typical example is the parallel DO loop, where different processors work on separate parts of the arrays involved in the loop. At the end of the loop, execution is synchronized (with soft- or hard-barriers[6]), and processors (processes) continue to the next available section of the program to execute. The (IBM) SPMD has been implemented in the current standard interface for shared memory multiprocessing, OpenMP, which uses multithreading, usually implemented as lightweight processes called threads.
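In OpenMP such a parallel loop might look as follows; this is a minimal sketch, with array names and sizes chosen purely for illustration. The directive splits the iterations across threads, and the implicit barrier at the end of the loop provides the synchronization described above.

```c
/* Sketch of a parallel loop in OpenMP: threads work on separate parts
 * of the arrays, then synchronize at the loop's implicit barrier. */
#include <stdio.h>

#define N 1000

int main(void)
{
    static double a[N], b[N], c[N];

    for (int i = 0; i < N; i++) {       /* serial initialization */
        b[i] = i;
        c[i] = 2.0 * i;
    }

    #pragma omp parallel for            /* iterations split across threads */
    for (int i = 0; i < N; i++)
        a[i] = b[i] + c[i];
    /* Implicit barrier here: all threads finish before execution continues. */

    printf("a[%d] = %f\n", N - 1, a[N - 1]);
    return 0;
}
```

Compiled without OpenMP support, the same program simply runs the loop serially, which is one reason the directive-based style caught on.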

Combination of levels of parallelism


Current computers allow exploiting many parallel modes at the same time for maximum combined effect. A distributed memory program using MPI may run on a collection of nodes. Each node may be a shared memory computer and execute in parallel on multiple CPUs using OpenMP. Within each CPU, SIMD vector instructions (usually generated automatically by the compiler) and superscalar instruction execution (usually handled transparently by the CPU itself, through pipelining and multiple parallel functional units) are used for maximum single-CPU speed.
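A minimal sketch of such a combination (assuming an MPI launcher and an OpenMP-capable compiler; rank and thread counts come from outside the program, e.g. from mpirun and OMP_NUM_THREADS) distributes an array across MPI ranks, lets OpenMP threads share each rank's block, and leaves SIMD vectorization of the loop body to the compiler.

```c
/* Hybrid sketch: MPI between nodes, OpenMP within a node; the compiler
 * may vectorize the loop body with SIMD instructions. */
#include <mpi.h>
#include <stdio.h>

#define N 1000000

static double x[N];

int main(int argc, char **argv)
{
    int rank, nranks;
    double local = 0.0, total = 0.0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    /* Outer (distributed memory) level: each rank owns one block. */
    int chunk = N / nranks;
    int lo = rank * chunk;
    int hi = (rank == nranks - 1) ? N : lo + chunk;

    /* Inner (shared memory) level: threads share the rank's block. */
    #pragma omp parallel for reduction(+:local)
    for (int i = lo; i < hi; i++) {
        x[i] = 0.5 * i;
        local += x[i];
    }

    /* Combine the per-rank partial sums by message passing. */
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("sum = %f\n", total);

    MPI_Finalize();
    return 0;
}
```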

History


The acronym SPMD, for "single program, multiple data", has been used to describe two different computational models for exploiting parallel computing, largely because both models are natural extensions of Flynn's taxonomy.[7] The two groups of researchers were unaware of each other's use of the term SPMD to independently describe different models of parallel programming.

The term SPMD was first proposed in 1983 by Michel Auguin (University of Nice Sophia-Antipolis) and François Larbey (Thomson/Sintra), in the context of the OPSILA parallel computer and of a fork-and-join, data-parallel computational model.[1] OPSILA consisted of a master (controller) processor and SIMD processors (or a vector-processor mode, in Flynn's terms). In Auguin's SPMD model, the same (parallel) task ("same program") is executed on different (SIMD) processors ("operating in lock-step mode"[1]), each acting on a part ("slice") of the data vector. Specifically, their 1985 paper[2] and others[3][1] stated:

We consider the SPMD (Single Program, Multiple Data) operating mode. This mode allows simultaneous execution of the same task (one per processor) but prevents data exchange between processors. Data exchanges are only performed under SIMD mode by means of vector assignments. We assume synchronizations are summed-up to switchings [sic] between SIMD and SPMD operatings [sic] modes using global fork-join primitives.

Starting around the same timeframe (in late 1983 – early 1984), the term SPMD was proposed by Frederica Darema (then at IBM, and part of the RP3 group) to define a different SPMD computational model.[6][5][4] In the intervening years this programming model has been applied to a wide range of general-purpose high-performance computers (including RP3, the 512-processor IBM Research Parallel Processor Prototype), and it has led to the current parallel computing standards. The (IBM) SPMD programming model assumes a multiplicity of processors which operate cooperatively, all executing the same program but able to take different paths through it based on parallelization directives embedded in the program:[6][5][4][9][10]

All processes participating in the parallel computation are created at the beginning of the execution and remain in existence until the end ... [the processors/processes] execute different instructions and act on different data ... the job [(work)] to be done by each process is allocated dynamically ... [i.e. the processes] self-schedule themselves to execute different instructions and act on different data [thus self-assign themselves to cooperate in execution of serial and parallel tasks (as well as replicate tasks) in the program.]

The notion of a process generalized that of a processor, in the sense that multiple processes can execute on one processor (for example, to exploit larger degrees of parallelism for greater efficiency and load balancing). The (IBM) SPMD model was proposed by Darema as an approach different from, and more efficient than, the fork-and-join pursued by the rest of the community at the time; it is also more general than the purely "data-parallel" computational model and can encompass fork-and-join (as a subcategory implementation). The original context of the (IBM) SPMD was the RP3 computer (the 512-processor IBM Research Parallel Processor Prototype), which supported general-purpose computing with both distributed and (logically) shared memory.[9] The (IBM) SPMD model was implemented by Darema and IBM colleagues in EPEX (Environment for Parallel Execution), one of the first prototype programming environments.[6][5][4][9][10][11] The effectiveness of the (IBM) SPMD was demonstrated for a wide class of applications,[9][4] and it was implemented in IBM Parallel FORTRAN in 1988,[12] the first vendor product for parallel programming; it was later adopted in MPI (1991 onwards), OpenMP (1997 onwards), and other environments which cite the (IBM) SPMD computational model.

By the late 1980s, there were many distributed-memory computers, each with its own proprietary message-passing library. The first SPMD standard was PVM. The current de facto standard is MPI.

The Cray parallel directives were a direct predecessor of OpenMP.

References

  1. M. Auguin, F. Larbey (1983). "OPSILA: an advanced SIMD for numerical analysis and signal processing". Microcomputers: Developments in Industry, Business, and Education / Ninth EUROMICRO Symposium on Microprocessing and Microprogramming, pp. 311–318, Madrid, September 13–16, 1983.
  2. M. Auguin, F. Larbey (1985). "A Multi-processor SIMD Machine: OPSILA". In K. Waldschmidt and B. Myhrhaug (eds.), EUROMICRO 1985. Elsevier Science Publishers B.V. (North Holland).
  3. Auguin, M.; Boeri, F.; Dalban, J.P.; Vincent-Carrefour, A. (1987). "Experience Using a SIMD/SPMD Multiprocessor Architecture". Microprocessing and Microprogramming. 21 (1–5): 171–178. doi:10.1016/0165-6074(87)90034-2.
  4. Darema, Frederica (2001). "The SPMD Model: Past, Present and Future". Recent Advances in Parallel Virtual Machine and Message Passing Interface, 8th European PVM/MPI Users' Group Meeting, Santorini/Thera, Greece, September 23–26, 2001. Lecture Notes in Computer Science 2131.
  5. F. Darema-Rogers, D. A. George, V. A. Norton, and G. F. Pfister (1985). "A VM Parallel Environment". IBM/RC11225 (1/23/85) and IBM/RC11381 (9/19/85).
  6. Darema, F.; George, D.A.; Norton, V.A.; Pfister, G.F. (1988). "A single-program-multiple-data computational model for EPEX/FORTRAN". Parallel Computing. 7: 11–24. doi:10.1016/0167-8191(88)90094-4.
  7. Flynn, Michael J. (September 1972). "Some Computer Organizations and Their Effectiveness". IEEE Transactions on Computers. C-21 (9): 948–960. doi:10.1109/TC.1972.5009071. S2CID 18573685.
  9. Darema, Frederica (1987). "Applications Environment for the IBM Research Parallel Processor Prototype (RP3)". IBM/RC12627 (3/27/87) and in Proceedings of the 1st International Conference on Supercomputing (ICS'87), Springer-Verlag (1987).
  10. Darema, Frederica (1988). "Parallel Applications Development for Shared Memory Systems". IBM/RC12229 (1986) and in Parallel Systems and Computation, G. Paul and G. S. Almasi (eds.), Elsevier Science Publishers B.V. (North Holland), 1988.
  11. J. M. Stone, F. Darema-Rogers, V. A. Norton, G. F. Pfister (1985). "Introduction to the VM/EPEX Preprocessor and Reference". IBM/RC11407 (9/30/85) and IBM/RC11408 (9/30/85).
  12. Toomey, L. J.; Plachy, E. C.; Scarborough, R. G.; Sahulka, R. J.; Shaw, J. F.; Shannon, A. W. (1988). "IBM Parallel FORTRAN". IBM Systems Journal. 27 (4): 416–435. doi:10.1147/sj.274.0416.