In compiler design, static single assignment form (often abbreviated as SSA form or simply SSA) is a type of intermediate representation (IR) where each variable is assigned exactly once. SSA is used in most high-quality optimizing compilers for imperative languages, including LLVM, the GNU Compiler Collection, and many commercial compilers.

There are efficient algorithms for converting programs into SSA form. To convert to SSA, existing variables in the original IR are split into versions, with new variables typically indicated by the original name and a subscript, so that every definition gets its own version. Additional statements that assign to new versions of variables may also need to be introduced at the join point of two control flow paths. Converting from SSA form to machine code is also efficient.

SSA makes numerous analyses needed for optimizations easier to perform, such as determining use-define chains, because when looking at a use of a variable there is only one place where that variable may have received a value. Most optimizations can be adapted to preserve SSA form, so that one optimization can be performed after another with no additional analysis. SSA-based optimizations are usually more efficient and more powerful than their non-SSA counterparts.

In functional language compilers, such as those for Scheme and ML, continuation-passing style (CPS) is generally used. SSA is formally equivalent to a well-behaved subset of CPS excluding non-local control flow, so optimizations and transformations formulated in terms of one generally apply to the other. Using CPS as the intermediate representation is more natural for higher-order functions and interprocedural analysis. CPS also easily encodes call/cc, whereas SSA does not.[1]

History


SSA was developed in the 1980s by several researchers at IBM. Kenneth Zadeck, a key member of the team, moved to Brown University as development continued.[2][3] A 1986 paper introduced birthpoints, identity assignments, and variable renaming such that variables had a single static assignment.[4] A subsequent 1987 paper by Jeanne Ferrante and Ronald Cytron[5] proved that the renaming done in the previous paper removes all false dependencies for scalars.[3] In 1988, Barry Rosen, Mark N. Wegman, and Kenneth Zadeck replaced the identity assignments with Φ-functions, introduced the name "static single-assignment form", and demonstrated a now-common SSA optimization.[6] The name Φ-function was chosen by Rosen to be a more publishable version of "phony function".[3] Alpern, Wegman, and Zadeck presented another optimization, but using the name "static single assignment".[7] Finally, in 1989, Rosen, Wegman, Zadeck, Cytron, and Ferrante found an efficient means of converting programs to SSA form.[8]

Benefits


The primary usefulness of SSA comes from how it simultaneously simplifies and improves the results of a variety of compiler optimizations, by simplifying the properties of variables. For example, consider this piece of code:

y := 1
y := 2
x := y

Humans can see that the first assignment is not necessary, and that the value of y being used in the third line comes from the second assignment of y. A program would have to perform reaching definition analysis to determine this. But if the program is in SSA form, both of these are immediate:

y1 := 1
y2 := 2
x1 := y2

Compiler optimization algorithms that are either enabled or strongly enhanced by the use of SSA include the following (a short sketch of constant propagation on SSA form appears after the list):

  • Constant folding – conversion of computations from runtime to compile time, e.g. treat the instruction a=3*4+5; as if it were a=17;
  • Value range propagation[9] – precompute the potential range of values a calculation could produce, allowing some branch conditions to be evaluated or simplified in advance
  • Sparse conditional constant propagation – simultaneously propagate constants and determine which branches are reachable, allowing conditional tests with constant operands to be resolved at compile time and unreachable code removed
  • Dead-code elimination – remove code that will have no effect on the results
  • Global value numbering – replace duplicate calculations producing the same result
  • Partial-redundancy elimination – removing duplicate calculations previously performed in some branches of the program
  • Strength reduction – replacing expensive operations by less expensive but equivalent ones, e.g. replace integer multiply or divide by powers of 2 with the potentially less expensive shift left (for multiply) or shift right (for divide).
  • Register allocation – optimize how the limited number of machine registers may be used for calculations
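
Many of these analyses become simpler on SSA form because every name has exactly one defining statement, so a single table from names to definitions can stand in for a reaching-definitions analysis. The following is a minimal sketch of constant folding and propagation over SSA form in Python; the (dest, op, args) instruction encoding and the helper names are assumptions made for illustration, not any particular compiler's representation.

# A minimal sketch of constant propagation over SSA form.
# Each instruction defines exactly one SSA name, so a single dictionary
# maps every name to its (sole) known constant value once discovered.
# Instructions are (dest, op, args) tuples; this encoding is hypothetical.

def propagate_constants(instructions):
    known = {}          # SSA name -> constant value, once discovered
    folded = []
    for dest, op, args in instructions:
        # Replace argument names whose values are already known constants.
        vals = [known.get(a, a) for a in args]
        if op == "const":
            known[dest] = vals[0]
        elif op == "add" and all(isinstance(v, int) for v in vals):
            known[dest] = vals[0] + vals[1]       # fold at compile time
            op, vals = "const", [known[dest]]
        folded.append((dest, op, vals))
    return folded

# Example: x1 := 3; y1 := 4; z1 := x1 + y1  folds z1 to the constant 7.
prog = [("x1", "const", [3]), ("y1", "const", [4]), ("z1", "add", ["x1", "y1"])]
print(propagate_constants(prog))
# [('x1', 'const', [3]), ('y1', 'const', [4]), ('z1', 'const', [7])]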

Converting to SSA


Converting ordinary code into SSA form is primarily a matter of replacing the target of each assignment with a new variable, and replacing each use of a variable with the "version" of the variable reaching that point. For example, consider the following control-flow graph:

[Figure: An example control-flow graph, before conversion to SSA]

Changing the name on the left hand side of "x ← x - 3" and changing the following uses of x to that new name would leave the program unaltered. This can be exploited in SSA by creating two new variables: x1 and x2, each of which is assigned only once. Likewise, giving distinguishing subscripts to all the other variables yields:

[Figure: An example control-flow graph, partially converted to SSA]

It is clear which definition each use is referring to, except for one case: both uses of y in the bottom block could be referring to either y1 or y2, depending on which path the control flow took.

To resolve this, a special statement is inserted in the last block, called a Φ (Phi) function. This statement will generate a new definition of y called y3 by "choosing" either y1 or y2, depending on the control flow in the past.

[Figure: An example control-flow graph, fully converted to SSA]

Now, the last block can simply use y3, and the correct value will be obtained either way. A Φ function for x is not needed: only one version of x, namely x2, reaches this point, so there is no problem (in other words, Φ(x2,x2)=x2).

Given an arbitrary control-flow graph, it can be difficult to tell where to insert Φ functions, and for which variables. This general question has an efficient solution that can be computed using a concept called dominance frontiers (see below).

Φ functions are not implemented as machine operations on most machines. A compiler can implement a Φ function by inserting "move" operations at the end of every predecessor block. In the example above, the compiler might insert a move from y1 to y3 at the end of the middle-left block and a move from y2 to y3 at the end of the middle-right block. These move operations might not end up in the final code based on the compiler's register allocation procedure. However, this approach may not work when simultaneous operations are speculatively producing inputs to a Φ function, as can happen on wide-issue machines. Typically, a wide-issue machine has a selection instruction used in such situations by the compiler to implement the Φ function.
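
The copy-insertion approach described above can be sketched as follows. This is a minimal illustration in Python, assuming a hypothetical block encoding (a dictionary with "preds", "phis", and "body" entries); the block names and concrete values are made up and do not come from any particular compiler.

# A minimal sketch of replacing Φ functions with moves in predecessor blocks.
# A block is a dict with "preds" (predecessor block names), "phis"
# (a list of (dest, one source per predecessor) pairs) and "body"
# (ordinary instructions). This encoding is hypothetical.

def lower_phis(blocks):
    for name, block in blocks.items():
        for dest, sources in block["phis"]:
            # One source value per predecessor, in the same order as "preds".
            for pred, src in zip(block["preds"], sources):
                blocks[pred]["body"].append(("move", dest, src))
        block["phis"] = []   # all Φ functions are now ordinary moves

# Example for the y3 := Φ(y1, y2) case above: a move "y3 := y1" is added to
# one predecessor and "y3 := y2" to the other.
blocks = {
    "left":  {"preds": [], "phis": [], "body": [("assign", "y1", 1)]},
    "right": {"preds": [], "phis": [], "body": [("assign", "y2", 2)]},
    "join":  {"preds": ["left", "right"],
              "phis": [("y3", ["y1", "y2"])],
              "body": [("print", "y3")]},
}
lower_phis(blocks)
# blocks["left"]["body"] now ends with ("move", "y3", "y1"), and
# blocks["right"]["body"] with ("move", "y3", "y2").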

Computing minimal SSA using dominance frontiers


In a control-flow graph, a node A is said to strictly dominate a different node B if it is impossible to reach B without passing through A first. In other words, if node B is reached, then it can be assumed that A has run. A is said to dominate B (or B to be dominated by A) if either A strictly dominates B or A = B.

A node which transfers control to a node A is called an immediate predecessor of A.

The dominance frontier of node A is the set of nodes B where A does not strictly dominate B, but does dominate some immediate predecessor of B. These are the points at which multiple control paths merge back together into a single path.

For example, in the following code:

[1] x = random()
if x < 0.5
    [2] result = "heads"
else
    [3] result = "tails"
end
[4] print(result)

Node 1 strictly dominates 2, 3, and 4, and the immediate predecessors of node 4 are nodes 2 and 3.

Dominance frontiers define the points at which Φ functions are needed. In the above example, when control is passed to node 4, the definition of result used depends on whether control was passed from node 2 or 3. Φ functions are not needed for variables defined in a dominator, as there is only one possible definition that can apply.
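
Once dominance frontiers are known, Φ placement itself is a small worklist procedure: a Φ for a variable is placed in the dominance frontier of every block that defines it, and because a Φ is itself a new definition, the process is repeated until it stabilizes (the iterated dominance frontier). The following Python sketch assumes the dominance frontiers df and the defining blocks defs of each variable are already available; these input names are illustrative assumptions.

# A minimal sketch of Φ placement using dominance frontiers.
# df[b] is the dominance frontier of block b, and defs[v] is the set of
# blocks containing an assignment to the original variable v.

def place_phis(df, defs):
    phis = {v: set() for v in defs}          # variable -> blocks needing a Φ
    for v, def_blocks in defs.items():
        worklist = list(def_blocks)
        while worklist:
            b = worklist.pop()
            for frontier_block in df[b]:
                if frontier_block not in phis[v]:
                    phis[v].add(frontier_block)
                    # A Φ is itself a new definition of v, so its block may
                    # force further Φ functions (the "iterated" frontier).
                    worklist.append(frontier_block)
    return phis

# For the coin-flip example above: result is assigned in blocks 2 and 3,
# whose dominance frontier is block 4, so a Φ for result is placed in block 4.
df = {1: set(), 2: {4}, 3: {4}, 4: set()}
print(place_phis(df, {"result": {2, 3}}))    # {'result': {4}}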

There is an efficient algorithm for finding dominance frontiers of each node. This algorithm was originally described in "Efficiently Computing Static Single Assignment Form and the Control Dependence Graph" by Ron Cytron, Jeanne Ferrante, et al. in 1991.[10]

Keith D. Cooper, Timothy J. Harvey, and Ken Kennedy of Rice University describe an algorithm in their paper titled A Simple, Fast Dominance Algorithm:[11]

for each node b
    dominance_frontier(b) := {}
for each node b
    if the number of immediate predecessors of b ≥ 2
        for each p in immediate predecessors of b
            runner := p
            while runner ≠ idom(b)
                dominance_frontier(runner) := dominance_frontier(runner) ∪ { b }
                runner := idom(runner)

In the code above, idom(b) is the immediate dominator of b, the unique node that strictly dominates b but does not strictly dominate any other node that strictly dominates b.
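
The pseudocode translates almost directly into executable form. Below is a minimal Python rendering, assuming the immediate predecessors preds and immediate dominators idom of each node have already been computed (for the entry node, idom is taken to be the node itself); these input names are assumptions for illustration.

# A minimal sketch of the Cooper-Harvey-Kennedy dominance-frontier
# computation shown above. preds[b] lists the immediate predecessors of b,
# and idom[b] is the immediate dominator of b (the entry maps to itself).

def dominance_frontiers(preds, idom):
    df = {b: set() for b in preds}
    for b in preds:
        if len(preds[b]) >= 2:               # only join points contribute
            for p in preds[b]:
                runner = p
                while runner != idom[b]:
                    df[runner].add(b)
                    runner = idom[runner]
    return df

# The coin-flip example: 1 -> 2, 1 -> 3, 2 -> 4, 3 -> 4.
preds = {1: [], 2: [1], 3: [1], 4: [2, 3]}
idom  = {1: 1, 2: 1, 3: 1, 4: 1}
print(dominance_frontiers(preds, idom))      # {1: set(), 2: {4}, 3: {4}, 4: set()}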

Variations that reduce the number of Φ functions


"Minimal" SSA inserts the minimal number of Φ functions required to ensure that each name is assigned a value exactly once and that each reference (use) of a name in the original program can still refer to a unique name. (The latter requirement is needed to ensure that the compiler can write down a name for each operand in each operation.)

However, some of these Φ functions could be dead. For this reason, minimal SSA does not necessarily produce the fewest Φ functions that are needed by a specific procedure. For some types of analysis, these Φ functions are superfluous and can cause the analysis to run less efficiently.

Pruned SSA


Pruned SSA form is based on a simple observation: Φ functions are only needed for variables that are "live" after the Φ function. (Here, "live" means that the value is used along some path that begins at the Φ function in question.) If a variable is not live, the result of the Φ function cannot be used and the assignment by the Φ function is dead.

Construction of pruned SSA form uses live-variable information in the Φ function insertion phase to decide whether a given Φ function is needed. If the original variable name isn't live at the Φ function insertion point, the Φ function isn't inserted.

Another possibility is to treat pruning as a dead-code elimination problem. Then, a Φ function is live only if any use in the input program will be rewritten to it, or if it will be used as an argument in another Φ function. When entering SSA form, each use is rewritten to the nearest definition that dominates it. A Φ function will then be considered live as long as it is the nearest definition that dominates at least one use, or at least one argument of a live Φ.
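
The liveness-based criterion amounts to a simple filter over the Φ placements computed for minimal SSA. Below is a minimal Python sketch, assuming a precomputed live_in map from blocks to the original variables live on entry; the extra variable tmp in the example is hypothetical.

# A minimal sketch of pruning Φ placement with liveness information.
# phis maps each original variable to the blocks where minimal SSA would
# insert a Φ; live_in[b] is the set of variables live on entry to block b.
# Both inputs are assumed to be precomputed.

def prune_phis(phis, live_in):
    return {v: {b for b in blocks if v in live_in[b]}
            for v, blocks in phis.items()}

# result is read in block 4, so its Φ survives; the hypothetical variable
# tmp is never read at or after block 4, so its Φ is pruned.
phis    = {"result": {4}, "tmp": {4}}
live_in = {4: {"result"}}
print(prune_phis(phis, live_in))   # {'result': {4}, 'tmp': set()}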

Semi-pruned SSA


Semi-pruned SSA form[12] is an attempt to reduce the number of Φ functions without incurring the relatively high cost of computing live-variable information. It is based on the following observation: if a variable is never live upon entry into a basic block, it never needs a Φ function. During SSA construction, Φ functions for any "block-local" variables are omitted.

Computing the set of block-local variables is a simpler and faster procedure than full live-variable analysis, making semi-pruned SSA form more efficient to compute than pruned SSA form. On the other hand, semi-pruned SSA form will contain more Φ functions.
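
The block-local test can be sketched as a single scan over each block: a name needs Φ functions only if some block reads it before any definition in that same block, because only then can its value live across a block boundary. The following Python fragment is illustrative only; the block encoding, with each instruction given as a pair of used variables and a defined variable, is an assumption.

# A minimal sketch of finding the "non-block-local" names used by
# semi-pruned SSA. Each block is a list of (used_vars, defined_var) pairs;
# this encoding is hypothetical.

def non_local_names(blocks):
    non_local = set()
    for block in blocks.values():
        defined_here = set()
        for used, defined in block:
            non_local |= set(used) - defined_here   # read before any local def
            defined_here.add(defined)
    return non_local

# "t" is defined and used only inside the second block, so it is block-local
# and never needs a Φ function; "x" is not.
blocks = {
    "b1": [([], "x")],
    "b2": [(["x"], "t"), (["t"], "y")],
}
print(non_local_names(blocks))   # {'x'}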

Block arguments


Block arguments are an alternative to Φ functions that is representationally identical but in practice can be more convenient during optimization. Blocks are named and take a list of block arguments, notated as function parameters. When calling a block the block arguments are bound to specified values. MLton, Swift SIL, and LLVM MLIR use block arguments.[13]
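For example, the Φ function y3 := Φ(y1, y2) from the earlier example can instead be written by giving the join block a parameter y3, with one predecessor branching to the block passing y1 as the argument and the other passing y2.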

Converting out of SSA form


SSA form is not normally used for direct execution (although it is possible to interpret SSA[14]), and it is frequently used "on top of" another IR with which it remains in direct correspondence. This can be accomplished by "constructing" SSA as a set of functions that map between parts of the existing IR (basic blocks, instructions, operands, etc.) and its SSA counterpart. When the SSA form is no longer needed, these mapping functions may be discarded, leaving only the now-optimized IR.

Performing optimizations on SSA form usually leads to entangled SSA webs, meaning there are Φ instructions whose operands do not all have the same root operand. In such cases color-out algorithms are used to come out of SSA. Naive algorithms introduce a copy along each predecessor path wherever a Φ function receives a source whose root symbol differs from that of the Φ destination. There are multiple algorithms for coming out of SSA with fewer copies; most use interference graphs or some approximation of them to do copy coalescing.[15]

Extensions


Extensions to SSA form can be divided into two categories.

Renaming scheme extensions alter the renaming criterion. Recall that SSA form renames each variable when it is assigned a value. Alternative schemes include static single use form (which renames each variable at each statement when it is used) and static single information form (which renames each variable when it is assigned a value, and at the post-dominance frontier).

Feature-specific extensions retain the single assignment property for variables, but incorporate new semantics to model additional features. Some feature-specific extensions model high-level programming language features like arrays, objects and aliased pointers. Other feature-specific extensions model low-level architectural features like speculation and predication.

Compilers using SSA form


Open-source

  • Mono uses SSA in its JIT compiler called Mini
  • WebKit uses SSA in its JIT compilers.[16][17]
  • Swift defines its own SSA form above LLVM IR, called SIL (Swift Intermediate Language).[18][19]
  • The Erlang compiler was rewritten in OTP 22.0 to "internally use an intermediate representation based on Static Single Assignment (SSA)", with plans for further optimizations built on top of SSA in future releases.[20]
  • The LLVM Compiler Infrastructure uses SSA form for all scalar register values (everything except memory) in its primary code representation. SSA form is only eliminated once register allocation occurs, late in the compile process (often at link time).
  • The GNU Compiler Collection (GCC) makes extensive use of SSA since version 4 (released in April 2005). The frontends generate "GENERIC" code that is then converted into "GIMPLE" code by the "gimplifier". High-level optimizations are then applied on the SSA form of "GIMPLE". The resulting optimized intermediate code is then translated into RTL, on which low-level optimizations are applied. The architecture-specific backends finally turn RTL into assembly language.
  • Go (1.7: for x86-64 architecture only; 1.8: for all supported architectures).[21][22]
  • IBM's open source adaptive Java virtual machine, Jikes RVM, uses extended Array SSA, an extension of SSA that allows analysis of scalars, arrays, and object fields in a unified framework. Extended Array SSA analysis is only enabled at the maximum optimization level, which is applied to the most frequently executed portions of code.
  • The Mozilla Firefox SpiderMonkey JavaScript engine uses SSA-based IR.[23]
  • The Chromium V8 JavaScript engine implements SSA in its Crankshaft compiler infrastructure as announced in December 2010
  • PyPy uses a linear SSA representation for traces in its JIT compiler.
  • The Android Runtime[24] and the Dalvik Virtual Machine use SSA.[25]
  • The Standard ML compiler MLton uses SSA in one of its intermediate languages.
  • LuaJIT makes heavy use of SSA-based optimizations.[26]
  • The PHP and Hack compiler HHVM uses SSA in its IR.[27]
  • The OCaml compiler uses SSA in its CMM IR (which stands for C--).[28]
  • libFirm, a library for use as the middle and back ends of a compiler, uses SSA form for all scalar register values until code generation by use of an SSA-aware register allocator.[29]
  • Various Mesa drivers via NIR, an SSA representation for shading languages.[30]

Commercial


Research and abandoned

  • The ETH Oberon-2 compiler was one of the first public projects to incorporate "GSA", a variant of SSA.
  • The Open64 compiler used SSA form in its global scalar optimizer, though the code is brought into SSA form before and taken out of SSA form afterwards. Open64 uses extensions to SSA form to represent memory in SSA form as well as scalar values.
  • In 2002, researchers modified IBM's JikesRVM (named Jalapeño at the time) to run both standard Java bytecode and a typesafe SSA bytecode (SafeTSA) class file format, and demonstrated significant performance benefits to using the SSA bytecode.
  • jackcc is an open-source compiler for the academic instruction set Jackal 3.0. It uses a simple 3-operand code with SSA for its intermediate representation. As an interesting variant, it replaces Φ functions with a so-called SAME instruction, which instructs the register allocator to place the two live ranges into the same physical register.
  • The Illinois Concert Compiler circa 1994[36] used a variant of SSA called SSU (Static Single Use) which renames each variable when it is assigned a value, and in each conditional context in which that variable is used; essentially the static single information form mentioned above. The SSU form is documented in John Plevyak's PhD thesis.
  • The COINS compiler uses SSA form optimizations.
  • Reservoir Labs' R-Stream compiler supports non-SSA (quad list), SSA and SSI (Static Single Information[37]) forms.[38]
  • Although not a compiler, the Boomerang decompiler uses SSA form in its internal representation. SSA is used to simplify expression propagation, identifying parameters and returns, preservation analysis, and more.
  • DotGNU Portable.NET used SSA in its JIT compiler.

References


Notes

  1. ^ Kelsey, Richard A. (1995). "A correspondence between continuation passing style and static single assignment form" (PDF). Papers from the 1995 ACM SIGPLAN workshop on Intermediate representations. pp. 13–22. doi:10.1145/202529.202532. ISBN 0897917545. S2CID 6207179.
  2. ^ Rastello & Tichadou 2022, sec. 1.4.
  3. ^ a b c Zadeck, Kenneth (April 2009). The Development of Static Single Assignment Form (PDF). Static Single-Assignment Form Seminar. Autrans, France.
  4. ^ Cytron, Ron; Lowry, Andy; Zadeck, F. Kenneth (1986). "Code motion of control structures in high-level languages". Proceedings of the 13th ACM SIGACT-SIGPLAN symposium on Principles of programming languages - POPL '86. pp. 70–85. doi:10.1145/512644.512651. S2CID 9099471.
  5. ^ Cytron, Ronald Kaplan; Ferrante, Jeanne. What's in a name? Or, the value of renaming for parallelism detection and storage allocation. International Conference on Parallel Processing, ICPP'87 1987. pp. 19–27.
  6. ^ Barry Rosen; Mark N. Wegman; F. Kenneth Zadeck (1988). "Global value numbers and redundant computations" (PDF). Proceedings of the 15th ACM SIGPLAN-SIGACT symposium on Principles of programming languages - POPL '88. pp. 12–27. doi:10.1145/73560.73562. ISBN 0-89791-252-7.
  7. ^ Alpern, B.; Wegman, M. N.; Zadeck, F. K. (1988). "Detecting equality of variables in programs". Proceedings of the 15th ACM SIGPLAN-SIGACT symposium on Principles of programming languages - POPL '88. pp. 1–11. doi:10.1145/73560.73561. ISBN 0897912527. S2CID 18384941.
  8. ^ Cytron, Ron; Ferrante, Jeanne; Rosen, Barry K.; Wegman, Mark N. & Zadeck, F. Kenneth (1991). "Efficiently computing static single assignment form and the control dependence graph" (PDF). ACM Transactions on Programming Languages and Systems. 13 (4): 451–490. CiteSeerX 10.1.1.100.6361. doi:10.1145/115372.115320. S2CID 13243943.
  9. ^ value range propagation
  10. ^ Cytron, Ron; Ferrante, Jeanne; Rosen, Barry K.; Wegman, Mark N.; Zadeck, F. Kenneth (1 October 1991). "Efficiently computing static single assignment form and the control dependence graph". ACM Transactions on Programming Languages and Systems. 13 (4): 451–490. doi:10.1145/115372.115320. S2CID 13243943.
  11. ^ Cooper, Keith D.; Harvey, Timothy J.; Kennedy, Ken (2001). A Simple, Fast Dominance Algorithm (PDF) (Technical report). Rice University, CS Technical Report 06-33870. Archived from the original (PDF) on 2025-08-14.
  12. ^ Briggs, Preston; Cooper, Keith D.; Harvey, Timothy J.; Simpson, L. Taylor (1998). Practical Improvements to the Construction and Destruction of Static Single Assignment Form (PDF) (Technical report). Archived from the original (PDF) on 2025-08-14.
  13. ^ "Block Arguments vs PHI nodes - MLIR Rationale". mlir.llvm.org. Retrieved 4 March 2022.
  14. ^ von Ronne, Jeffery; Ning Wang; Michael Franz (2004). "Interpreting programs in static single assignment form". Proceedings of the 2004 workshop on Interpreters, virtual machines and emulators - IVME '04. p. 23. doi:10.1145/1059579.1059585. ISBN 1581139098. S2CID 451410.
  15. ^ Boissinot, Benoit; Darte, Alain; Rastello, Fabrice; Dinechin, Benoît Dupont de; Guillon, Christophe (2008). "Revisiting Out-of-SSA Translation for Correctness, Code Quality, and Efficiency". HAL-Inria Cs.DS: 14.
  16. ^ "Introducing the WebKit FTL JIT". 13 May 2014.
  17. ^ "Introducing the B3 JIT Compiler". 15 February 2016.
  18. ^ "Swift Intermediate Language (GitHub)". GitHub. 30 October 2021.
  19. ^ "Swift's High-Level IR: A Case Study of Complementing LLVM IR with Language-Specific Optimization, LLVM Developers Meetup 10/2015". YouTube. 9 November 2015. Archived from the original on 2025-08-14.
  20. ^ "OTP 22.0 Release Notes".
  21. ^ "Go 1.7 Release Notes - The Go Programming Language". golang.org. Retrieved 2025-08-14.
  22. ^ "Go 1.8 Release Notes - The Go Programming Language". golang.org. Retrieved 2025-08-14.
  23. ^ "IonMonkey Overview".,
  24. ^ The Evolution of ART - Google I/O 2016. Google. 25 May 2016. Event occurs at 3m47s.
  25. ^ Ramanan, Neeraja (12 Dec 2011). "JIT through the ages" (PDF).
  26. ^ "Bytecode Optimizations". the LuaJIT project.
  27. ^ "HipHop Intermediate Representation (HHIR)". GitHub. 30 October 2021.
  28. ^ Chambart, Pierre; Laviron, Vincent; Pinto, Dario (2025-08-14). "Behind the Scenes of the OCaml Optimising Compiler". OCaml Pro.
  29. ^ "Firm - Optimization and Machine Code Generation".
  30. ^ Ekstrand, Jason (16 December 2014). "Reintroducing NIR, a new IR for mesa".
  31. ^ "The Java HotSpot Performance Engine Architecture". Oracle Corporation.
  32. ^ "Introducing a new, advanced Visual C++ code optimizer". 4 May 2016.
  33. ^ "SPIR-V spec" (PDF).
  34. ^ Sarkar, V. (May 1997). "Automatic selection of high-order transformations in the IBM XL FORTRAN compilers" (PDF). IBM Journal of Research and Development. 41 (3). IBM: 233–264. doi:10.1147/rd.413.0233.
  35. ^ Chakrabarti, Gautam; Grover, Vinod; Aarts, Bastiaan; Kong, Xiangyun; Kudlur, Manjunath; Lin, Yuan; Marathe, Jaydeep; Murphy, Mike; Wang, Jian-Zhong (2012). "CUDA: Compiling and optimizing for a GPU platform". Procedia Computer Science. 9: 1910–1919. doi:10.1016/j.procs.2012.04.209.
  36. ^ "Illinois Concert Project". Archived from the original on 2025-08-14.
  37. ^ Ananian, C. Scott; Rinard, Martin (1999). Static Single Information Form (PDF) (Technical report). CiteSeerX 10.1.1.1.9976.
  38. ^ Encyclopedia of Parallel Computing.

General references
