From Wikipedia, the free encyclopedia

Rule-based machine translation (RBMT) is a classical approach to machine translation based on linguistic information about the source and target languages. This information is drawn from (unilingual, bilingual or multilingual) dictionaries and grammars covering the main semantic, morphological, and syntactic regularities of each language. Given input sentences, an RBMT system generates output sentences on the basis of an analysis of both the source and target languages involved. RBMT has been progressively superseded by more efficient methods, particularly neural machine translation.[1]

History


The first RBMT systems were developed in the early 1970s. Milestones in this evolution included the emergence of the following RBMT systems:

Today, other common RBMT systems include:

Types of RBMT


There are three different types of rule-based machine translation systems:

  1. Direct Systems (Dictionary Based Machine Translation) map input to output with basic rules.
  2. Transfer RBMT Systems (Transfer Based Machine Translation) employ morphological and syntactical analysis.
  3. Interlingual RBMT Systems (Interlingua) use an abstract, language-independent meaning representation.[4][5]

RBMT systems stand in contrast to example-based machine translation systems, whereas hybrid machine translation systems make use of many principles derived from RBMT.

Basic principles


The main approach of RBMT systems is to link the structure of a given input sentence with the structure of the required output sentence while preserving its meaning. The following example illustrates the general frame of RBMT:

A girl eats an apple. Source language: English; target language: German

Minimally, to get a German translation of this English sentence one needs:

  1. A dictionary that will map each English word to an appropriate German word.
  2. Rules representing regular English sentence structure.
  3. Rules representing regular German sentence structure.

Finally, we need rules by which these two structures can be related to each other.

Accordingly, we can state the following stages of translation:

1st: getting basic part-of-speech information of each source word:
a = indef.article; girl = noun; eats = verb; an = indef.article; apple = noun
2nd: getting syntactic information about the verb "to eat":
NP-eat-NP; here: eat – Present Simple, 3rd Person Singular, Active Voice
3rd: parsing the source sentence:
(NP an apple) = the object of eat

Often only partial parsing is sufficient to get to the syntactic structure of the source sentence and to map it onto the structure of the target sentence.

4th: translate English words into German
a (category = indef.article) => ein (category = indef.article)
girl (category = noun) => Mädchen (category = noun)
eat (category = verb) => essen (category = verb)
an (category = indef.article) => ein (category = indef.article)
apple (category = noun) => Apfel (category = noun)
5th: Mapping dictionary entries into appropriate inflected forms (final generation):
A girl eats an apple. => Ein Mädchen isst einen Apfel.
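The five stages above can be sketched in miniature. The following Python toy runs the example end to end; all dictionary entries and agreement rules are hand-invented for this one sentence pattern and do not reflect any real RBMT system:

```python
# Stage 1/4: bilingual dictionary with part-of-speech categories.
LEXICON = {
    "a":     ("indef.article", "ein"),
    "an":    ("indef.article", "ein"),
    "girl":  ("noun",          "Mädchen"),
    "eats":  ("verb",          "essen"),
    "apple": ("noun",          "Apfel"),
}

# Stage 5: German agreement rules for this one sentence pattern.
# The subject NP is nominative, the object NP accusative, and the
# verb is inflected for 3rd person singular present.
GENDER = {"Mädchen": "neut", "Apfel": "masc"}
ARTICLE = {  # (case, gender) -> form of "ein"
    ("nom", "neut"): "ein",  ("nom", "masc"): "ein",
    ("acc", "neut"): "ein",  ("acc", "masc"): "einen",
}
VERB_3SG = {"essen": "isst"}

def translate(sentence: str) -> str:
    words = sentence.rstrip(".").lower().split()
    # Stage 3: parse "DET N V DET N" as subject-verb-object.
    det1, subj, verb, det2, obj = words
    # Stage 4: lexical transfer via the bilingual dictionary.
    g_subj = LEXICON[subj][1]
    g_verb = LEXICON[verb][1]
    g_obj = LEXICON[obj][1]
    # Stage 5: generate inflected forms with case/gender agreement.
    out = [
        ARTICLE[("nom", GENDER[g_subj])].capitalize(), g_subj,
        VERB_3SG[g_verb],
        ARTICLE[("acc", GENDER[g_obj])], g_obj,
    ]
    return " ".join(out) + "."

print(translate("A girl eats an apple."))  # → Ein Mädchen isst einen Apfel.
```

A production system would replace each hard-coded table with general morphological and syntactic rules, but the division of labour between dictionary lookup, parsing, transfer, and generation is the same.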

Ontologies


An ontology is a formal representation of knowledge that includes the concepts (such as objects, processes etc.) in a domain and some relations between them. If the stored information is of linguistic nature, one can speak of a lexicon.[6] In NLP, ontologies can be used as a source of knowledge for machine translation systems. With access to a large knowledge base, rule-based systems can be enabled to resolve many (especially lexical) ambiguities on their own. In the following classic examples, as humans, we are able to interpret the prepositional phrase according to the context because we use our world knowledge, stored in our lexicons:

I saw a man/star/molecule with a microscope/telescope/binoculars.[6]

Since the syntax does not change, a traditional rule-based machine translation system may not be able to differentiate between the meanings. With a large enough ontology as a source of knowledge however, the possible interpretations of ambiguous words in a specific context can be reduced.[6]
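As a toy illustration of how ontological knowledge can prune readings, the sketch below checks whether an instrument can plausibly be used to view a given object; the miniature concept hierarchy and selectional restrictions are invented for this example:

```python
# is-a links in a miniature, invented ontology.
ISA = {
    "man": "animate",
    "star": "celestial-body",
    "molecule": "microscopic-object",
    "microscope": "instrument",
    "telescope": "instrument",
    "binoculars": "instrument",
}

# What each viewing instrument can plausibly make visible
# (a selectional restriction on "see X with <instrument>").
SUITABLE_OBJECT = {
    "microscope": {"microscopic-object"},
    "telescope": {"celestial-body"},
    "binoculars": {"animate", "celestial-body"},
}

def plausible(obj: str, instrument: str) -> bool:
    """Check whether 'I saw OBJ with INSTRUMENT' satisfies the restriction."""
    return ISA[obj] in SUITABLE_OBJECT[instrument]

print(plausible("molecule", "microscope"))  # True
print(plausible("star", "microscope"))      # False
```

With such restrictions, "I saw a molecule with a microscope" keeps its instrument reading, while "I saw a star with a microscope" is rejected and another attachment or sense must be chosen.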

Building ontologies


The ontology generated for the PANGLOSS knowledge-based machine translation system in 1993 may serve as an example of how an ontology for NLP purposes can be compiled:[7][8]

  • A large-scale ontology is necessary to help parsing in the active modules of the machine translation system.
  • In the PANGLOSS example, about 50,000 nodes were intended to be subsumed under the smaller, manually built upper (abstract) region of the ontology. Because of its size, it had to be created automatically.
  • The goal was to merge the two resources LDOCE online and WordNet to combine the benefits of both: concise definitions from Longman, and semantic relations allowing for semi-automatic taxonomization to the ontology from WordNet.
    • A definition match algorithm was created to automatically merge the correct meanings of ambiguous words between the two online resources, based on the words that the definitions of those meanings have in common in LDOCE and WordNet. Using a similarity matrix, the algorithm delivered matches between meanings, including a confidence factor. This algorithm alone, however, did not match all meanings correctly.
    • A second hierarchy match algorithm was therefore created which uses the taxonomic hierarchies found in WordNet (deep hierarchies) and partially in LDOCE (flat hierarchies). This works by first matching unambiguous meanings, then limiting the search space to only the respective ancestors and descendants of those matched meanings. Thus, the algorithm matched locally unambiguous meanings (for instance, while the word seal as such is ambiguous, there is only one meaning of seal in the animal subhierarchy).
  • Both algorithms complemented each other and helped construct a large-scale ontology for the machine translation system. The WordNet hierarchies, coupled with the matching definitions of LDOCE, were subordinated to the ontology's upper region. As a result, the PANGLOSS MT system was able to make use of this knowledge base, mainly in its generation element.
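The definition match idea can be sketched with a simple word-overlap similarity. The sample sense definitions below are invented stand-ins for LDOCE and WordNet entries; the real PANGLOSS algorithm used a richer similarity matrix and confidence factors:

```python
def overlap(a: str, b: str) -> float:
    """Jaccard similarity between the word sets of two definitions."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def best_match(ldoce_defs, wordnet_defs):
    """For each LDOCE sense, pick the WordNet sense whose definition
    is most similar, returning (index, confidence) pairs."""
    matches = []
    for d in ldoce_defs:
        scores = [overlap(d, w) for w in wordnet_defs]
        best = max(range(len(scores)), key=scores.__getitem__)
        matches.append((best, scores[best]))
    return matches

# Invented definitions for the two senses of the ambiguous word "seal".
seal_ldoce = ["a sea animal that eats fish",
              "a mark stamped on wax to show authority"]
seal_wordnet = ["an official stamp or mark of authority",
                "a fish eating sea mammal"]
print(best_match(seal_ldoce, seal_wordnet))  # animal sense -> 1, stamp sense -> 0
```

As in PANGLOSS, senses whose definitions share vocabulary align with high confidence, while low-scoring matches would be handed to the hierarchy match algorithm.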

Components


The RBMT system contains:

  • an SL morphological analyser - analyses a source language word and provides the morphological information;
  • an SL parser - a syntax analyser which analyses source language sentences;
  • a translator - used to translate a source language word into the target language;
  • a TL morphological generator - works as a generator of appropriate target language words for the given grammatical information;
  • a TL parser - works as a composer of suitable target language sentences;
  • several dictionaries - more specifically a minimum of three dictionaries:
an SL dictionary - needed by the source language morphological analyser for morphological analysis,
a bilingual dictionary - used by the translator to translate source language words into target language words,
a TL dictionary - needed by the target language morphological generator to generate target language words.[9]

The RBMT system makes use of the following:

  • a Source Grammar for the input language which builds syntactic constructions from input sentences;
  • a Source Lexicon which captures all of the allowable vocabulary in the domain;
  • Source Mapping Rules which indicate how syntactic heads and grammatical functions in the source language are mapped onto domain concepts and semantic roles in the interlingua;
  • a Domain Model/Ontology which defines the classes of domain concepts and restricts the fillers of semantic roles for each class;
  • Target Mapping Rules which indicate how domain concepts and semantic roles in the interlingua are mapped onto syntactic heads and grammatical functions in the target language;
  • a Target Lexicon which contains appropriate target lexemes for each domain concept;
  • a Target Grammar for the target language which realizes target syntactic constructions as linearized output sentences.[10]
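The knowledge sources listed above can be sketched as data in a toy interlingua-style pipeline. All rule contents below (concept names, role restrictions, lexemes) are invented for illustration, and morphological generation is left out, so the output stays in lemma form:

```python
# Source mapping rules: source verb -> domain concept and semantic roles.
SOURCE_MAPPING = {"eats": {"concept": "INGEST", "roles": ("agent", "patient")}}

# Domain model/ontology: restrict the fillers of each semantic role.
DOMAIN_MODEL = {"INGEST": {"agent": "animate", "patient": "food"}}

# Target mapping rules and target lexicon: concept/lexeme realization.
TARGET_MAPPING = {"INGEST": "essen"}
TARGET_LEX = {"girl": "Mädchen", "apple": "Apfel"}

def to_interlingua(verb, agent, patient, types):
    """Map a parsed source clause onto an interlingua frame,
    enforcing the domain model's role restrictions."""
    concept = SOURCE_MAPPING[verb]["concept"]
    frame = {"concept": concept, "agent": agent, "patient": patient}
    for role, required in DOMAIN_MODEL[concept].items():
        if types[frame[role]] != required:
            raise ValueError(f"{frame[role]!r} cannot fill the {role} role of {concept}")
    return frame

def realize(frame):
    """Target mapping plus a trivial agent-verb-patient target grammar.
    A real system's TL morphological generator would inflect the lemmas."""
    verb = TARGET_MAPPING[frame["concept"]]
    return f'{TARGET_LEX[frame["agent"]]} {verb} {TARGET_LEX[frame["patient"]]}'

types = {"girl": "animate", "apple": "food"}
frame = to_interlingua("eats", "girl", "apple", types)
print(realize(frame))  # → Mädchen essen Apfel (uninflected lemmas)
```

Note how the domain model does real work here: swapping the roles ("an apple eats a girl") violates the restriction that the agent of INGEST must be animate, and the frame is rejected before generation.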

Advantages

  • No bilingual texts are required. This makes it possible to create translation systems for languages that have no texts in common, or even no digitized data whatsoever.
  • Domain independent. Rules are usually written in a domain independent manner, so the vast majority of rules will "just work" in every domain, and only a few specific cases per domain may need rules written for them.
  • No quality ceiling. Every error can be corrected with a targeted rule, even if the trigger case is extremely rare. This is in contrast to statistical systems where infrequent forms will be washed away by default.
  • Total control. Because all rules are hand-written, you can easily debug a rule-based system to see exactly where a given error enters the system, and why.
  • Reusability. Because RBMT systems are generally built from a strong source language analysis that is fed to a transfer step and target language generator, the source language analysis and target language generation parts can be shared between multiple translation systems, requiring only the transfer step to be specialized. Additionally, source language analysis for one language can be reused to bootstrap a closely related language analysis.

Shortcomings

  • Insufficient number of really good dictionaries. Building new dictionaries is expensive.
  • Some linguistic information still needs to be set manually.
  • It is hard to deal with rule interactions in big systems, ambiguity, and idiomatic expressions.
  • Failure to adapt to new domains. Although RBMT systems usually provide a mechanism to create new rules and extend and adapt the lexicon, changes are usually very costly and the results, frequently, do not pay off.[11]

References

  1. ^ Wang, Haifeng; Wu, Hua; He, Zhongjun; Huang, Liang; Church, Kenneth Ward (2025-08-05). "Progress in Machine Translation". Engineering. ISSN 2095-8099.
  2. ^ "MT Software". AAMT. Archived from the original on 2025-08-05.
  3. ^ "MACHINE TRANSLATION IN JAPAN". www.wtec.org. January 1992. Archived from the original on 2025-08-05.
  4. ^ Koehn, Philipp (2010). Statistical Machine Translation. Cambridge: Cambridge University Press. p. 15. ISBN 9780521874151.
  5. ^ Nirenburg, Sergei (1989). "Knowledge-Based Machine Translation". Machine Translation. 4 (1). Kluwer Academic Publishers: 5–24. JSTOR 40008396.
  6. ^ a b c Vossen, Piek: Ontologies. In: Mitkov, Ruslan (ed.) (2003): Handbook of Computational Linguistics, Chapter 25. Oxford: Oxford University Press.
  7. ^ Knight, Kevin (1993). "Building a Large Ontology for Machine Translation". Human Language Technology: Proceedings of a Workshop Held at Plainsboro, New Jersey, March 21–24, 1993. Princeton, New Jersey: Association for Computational Linguistics. pp. 185–190. doi:10.3115/1075671.1075713. ISBN 978-1-55860-324-0.
  8. ^ Knight, Kevin; Luk, Steve K. (1994). Building a Large-Scale Knowledge Base for Machine Translation. Paper presented at the Twelfth National Conference on Artificial Intelligence. arXiv:cmp-lg/9407029.
  9. ^ Hettige, B.; Karunananda, A.S. (2011). "Computational Model of Grammar for English to Sinhala Machine Translation". 2011 International Conference on Advances in ICT for Emerging Regions (ICTer). pp. 26–31. doi:10.1109/ICTer.2011.6075022. ISBN 978-1-4577-1114-5. S2CID 45871137.
  10. ^ Lonsdale, Deryle; Mitamura, Teruko; Nyberg, Eric (1995). "Acquisition of Large Lexicons for Practical Knowledge-Based MT". Machine Translation. 9 (3–4). Kluwer Academic Publishers: 251–283. doi:10.1007/BF00980580. S2CID 1106335.
  11. ^ Lagarda, A.-L.; Alabau, V.; Casacuberta, F.; Silva, R.; Díaz-de-Liaño, E. (2009). "Statistical Post-Editing of a Rule-Based Machine Translation System" (PDF). Proceedings of NAACL HLT 2009: Short Papers, pages 217–220, Boulder, Colorado. Association for Computational Linguistics. Retrieved 20 June 2012.

Literature

  • Arnold, D.J. et al. (1993): Machine Translation: an Introductory Guide
  • Hutchins, W.J. (1986): Machine Translation: Past, Present, Future