Friendly artificial intelligence

From Wikipedia, the free encyclopedia

Friendly artificial intelligence (friendly AI or FAI) is hypothetical artificial general intelligence (AGI) that would have a positive (benign) effect on humanity, or at least align with human interests such as fostering the improvement of the human species. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to practically bring about this behavior and ensure that it is adequately constrained.

Etymology and usage

[Image: Eliezer Yudkowsky, AI researcher and creator of the term]

The term was coined by Eliezer Yudkowsky,[1] who is best known for popularizing the idea,[2][3] to discuss superintelligent artificial agents that reliably implement human values. Stuart J. Russell and Peter Norvig's leading artificial intelligence textbook, Artificial Intelligence: A Modern Approach, describes the idea:[2]

Yudkowsky (2008) goes into more detail about how to design a Friendly AI. He asserts that friendliness (a desire not to harm humans) should be designed in from the start, but that the designers should recognize both that their own designs may be flawed, and that the robot will learn and evolve over time. Thus the challenge is one of mechanism design—to define a mechanism for evolving AI systems under a system of checks and balances, and to give the systems utility functions that will remain friendly in the face of such changes.

"Friendly" is used in this context as technical terminology, and picks out agents that are safe and useful, not necessarily ones that are "friendly" in the colloquial sense. The concept is primarily invoked in the context of discussions of recursively self-improving artificial agents that rapidly explode in intelligence, on the grounds that this hypothetical technology would have a large, rapid, and difficult-to-control impact on human society.[4]

Risks of unfriendly AI


The roots of concern about artificial intelligence are very old. Kevin LaGrandeur showed that the dangers specific to AI can be seen in ancient literature concerning artificial humanoid servants such as the golem, or the proto-robots of Gerbert of Aurillac and Roger Bacon. In those stories, the extreme intelligence and power of these humanoid creations clash with their status as slaves (which by nature are seen as sub-human), and cause disastrous conflict.[5] By 1942 these themes prompted Isaac Asimov to create the "Three Laws of Robotics", principles hard-wired into all the robots in his fiction, intended to prevent them from turning on their creators or from allowing their creators to come to harm.[6]

In modern times, as the prospect of superintelligent AI looms nearer, philosopher Nick Bostrom has said that superintelligent AI systems with goals that are not aligned with human ethics are intrinsically dangerous unless extreme measures are taken to ensure the safety of humanity. He put it this way:

Basically we should assume that a 'superintelligence' would be able to achieve whatever goals it has. Therefore, it is extremely important that the goals we endow it with, and its entire motivation system, is 'human friendly.'

In 2008, Eliezer Yudkowsky called for the creation of "friendly AI" to mitigate existential risk from advanced artificial intelligence. He explains: "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else."[7]

Steve Omohundro says that a sufficiently advanced AI system will, unless explicitly counteracted, exhibit a number of basic "drives", such as resource acquisition, self-preservation, and continuous self-improvement, because of the intrinsic nature of goal-driven systems, and that these drives will, "without special precautions", cause the AI to exhibit undesired behavior.[8][9]
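
The intuition behind these convergent instrumental drives can be shown with a small, purely illustrative planner. The following Python sketch is a hypothetical toy example, not Omohundro's formal argument: three agents with unrelated terminal goals all choose a generic "acquire resources" action first, because extra resources raise the utility attainable under any of their goals.

```python
# Toy, hypothetical illustration of "instrumental convergence": agents with
# unrelated terminal goals all rank generic resource acquisition highly,
# because extra resources raise the utility achievable under almost any goal.
# This is a minimal sketch of the intuition, not Omohundro's formal model.

from dataclasses import dataclass


@dataclass
class State:
    resources: float  # generic resources the agent controls
    progress: float   # progress toward the agent's terminal goal


# Three unrelated terminal goals, each scoring a state differently.
GOALS = {
    "make_paperclips": lambda s: 1.0 * s.progress,
    "prove_theorems":  lambda s: 2.0 * s.progress,
    "plant_forests":   lambda s: 0.5 * s.progress,
}


def acquire_resources(s: State) -> State:
    """Instrumental action: gather more resources, no direct goal progress."""
    return State(resources=s.resources + 10.0, progress=s.progress)


def work_directly(s: State) -> State:
    """Terminal action: convert whatever resources exist into goal progress."""
    return State(resources=0.0, progress=s.progress + s.resources)


ACTIONS = {"acquire_resources": acquire_resources, "work_directly": work_directly}


def best_two_step_plan(goal, start: State):
    """Exhaustively evaluate all two-step plans and return the best one."""
    plans = []
    for name1, a1 in ACTIONS.items():
        for name2, a2 in ACTIONS.items():
            plans.append((goal(a2(a1(start))), name1, name2))
    return max(plans)  # highest-utility plan


if __name__ == "__main__":
    start = State(resources=1.0, progress=0.0)
    for goal_name, goal in GOALS.items():
        utility, first, second = best_two_step_plan(goal, start)
        # For every goal, the best plan begins with "acquire_resources".
        print(f"{goal_name}: ({first}, {second}) -> utility {utility}")
```

Running the sketch prints the same opening move for all three goals; that shared behavioral pattern is what the "basic AI drives" argument generalizes.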

Alexander Wissner-Gross says that AIs driven to maximize their future freedom of action (or causal path entropy) might be considered friendly if their planning horizon is longer than a certain threshold, and unfriendly if their planning horizon is shorter than that threshold.[10][11]
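
For orientation, the central quantities from the cited Wissner-Gross and Freer paper can be restated as follows. The notation below is a hedged summary of that paper's definitions; the friendliness threshold on the planning horizon τ is the interpretation described above, not an equation from the paper.

```latex
% Causal path entropy of a macrostate X over all trajectories x(t) of
% duration \tau (the planning horizon), following Wissner-Gross & Freer (2013):
S_c(X, \tau) \;=\; -\,k_B \int_{x(t)} \Pr\!\big(x(t) \mid x(0)\big)\,
                   \ln \Pr\!\big(x(t) \mid x(0)\big)\, \mathcal{D}x(t)

% Causal entropic force: a gradient that pushes the system toward states
% from which the greatest diversity of futures remains reachable within \tau:
F(X_0, \tau) \;=\; T_c \,\nabla_X S_c(X, \tau)\Big|_{X_0}
```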

Luke Muehlhauser, writing for the Machine Intelligence Research Institute, recommends that machine ethics researchers adopt what Bruce Schneier has called the "security mindset": rather than thinking about how a system will work, imagine how it could fail. For instance, he suggests that even an AI that only makes accurate predictions and communicates via a text interface might cause unintended harm.[12]

In 2014, Luke Muehlhauser and Nick Bostrom underlined the need for 'friendly AI';[13] nonetheless, the difficulties in designing a 'friendly' superintelligence, for instance via programming counterfactual moral thinking, are considerable.[14][15]

Coherent extrapolated volition


Yudkowsky advances the Coherent Extrapolated Volition (CEV) model. According to him, our coherent extrapolated volition is "our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together; where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted".[16]

Rather than being designed directly by human programmers, a Friendly AI is to be designed by a "seed AI" programmed to first study human nature and then produce the AI that humanity would want if it had sufficient time and insight to arrive at a satisfactory answer.[16] The appeal to an objective defined through contingent human nature (perhaps expressed, for mathematical purposes, in the form of a utility function or other decision-theoretic formalism) as the ultimate criterion of "Friendliness" is an answer to the meta-ethical problem of defining an objective morality; extrapolated volition is intended to be what humanity objectively would want, all things considered, but it can only be defined relative to the psychological and cognitive qualities of present-day, unextrapolated humanity.

Other approaches


Steve Omohundro has proposed a "scaffolding" approach to AI safety, in which one provably safe AI generation helps build the next provably safe generation.[17]

Seth Baum argues that the development of safe, socially beneficial artificial intelligence or artificial general intelligence is a function of the social psychology of AI research communities and so can be constrained by extrinsic measures and motivated by intrinsic measures. Intrinsic motivations can be strengthened when messages resonate with AI developers; Baum argues that, in contrast, "existing messages about beneficial AI are not always framed well". Baum advocates for "cooperative relationships, and positive framing of AI researchers" and cautions against characterizing AI researchers as "not want(ing) to pursue beneficial designs".[18]

In his book Human Compatible, AI researcher Stuart J. Russell lists three principles to guide the development of beneficial machines. He emphasizes that these principles are not meant to be explicitly coded into the machines; rather, they are intended for the human developers. The principles are as follows:[19]: 173

  1. The machine's only objective is to maximize the realization of human preferences.
  2. The machine is initially uncertain about what those preferences are.
  3. The ultimate source of information about human preferences is human behavior.

The "preferences" Russell refers to "are all-encompassing; they cover everything you might care about, arbitrarily far into the future."[19]: 173 Similarly, "behavior" includes any choice between options,[19]: 177 and the uncertainty is such that some probability, which may be quite small, must be assigned to every logically possible human preference.[19]: 201
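
To see how the second and third principles work together, consider a minimal Bayesian sketch in Python: the machine begins uncertain over several candidate preference functions (principle 2) and updates that belief from observed human choices (principle 3). The hypotheses, options, and softmax choice model below are illustrative assumptions, not Russell's formalism or any specific assistance-game algorithm.

```python
# Toy illustration of principles 2 and 3: a machine that is uncertain about
# human preferences and updates its belief from observed human behavior.
# Illustrative sketch only; not Russell's formal model.

import math

OPTIONS = ["coffee", "tea", "water"]

# Candidate hypotheses about the human's preferences (utility per option).
HYPOTHESES = {
    "likes_coffee": {"coffee": 2.0, "tea": 1.0, "water": 0.5},
    "likes_tea":    {"coffee": 0.5, "tea": 2.0, "water": 1.0},
    "indifferent":  {"coffee": 1.0, "tea": 1.0, "water": 1.0},
}

# Principle 2: start uncertain -- a uniform prior over hypotheses.
belief = {h: 1.0 / len(HYPOTHESES) for h in HYPOTHESES}


def choice_likelihood(hypothesis: str, chosen: str) -> float:
    """Probability the human picks `chosen`, assuming noisy (softmax) rationality."""
    utils = HYPOTHESES[hypothesis]
    exp_utils = {o: math.exp(utils[o]) for o in OPTIONS}
    return exp_utils[chosen] / sum(exp_utils.values())


def update(belief: dict, chosen: str) -> dict:
    """Principle 3: human behavior is the source of information about preferences."""
    unnormalized = {h: p * choice_likelihood(h, chosen) for h, p in belief.items()}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}


if __name__ == "__main__":
    # The human repeatedly chooses tea; belief mass shifts toward "likes_tea",
    # but no hypothesis is ever driven to exactly zero probability.
    for observed in ["tea", "tea", "tea"]:
        belief = update(belief, observed)
    for h, p in sorted(belief.items(), key=lambda kv: -kv[1]):
        print(f"{h}: {p:.3f}")
```

After three observed tea choices the belief concentrates on the "likes_tea" hypothesis without ruling out the alternatives, which mirrors Russell's point that some probability must be retained for every logically possible preference.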

Public policy


James Barrat, author of Our Final Invention, suggested that "a public-private partnership has to be created to bring A.I.-makers together to share ideas about security—something like the International Atomic Energy Agency, but in partnership with corporations." He urges AI researchers to convene a meeting similar to the Asilomar Conference on Recombinant DNA, which discussed risks of biotechnology.[17]

John McGinnis encourages governments to accelerate friendly AI research. Because the goalposts of friendly AI are not necessarily evident, he suggests a model similar to the National Institutes of Health, where "Peer review panels of computer and cognitive scientists would sift through projects and choose those that are designed both to advance AI and assure that such advances would be accompanied by appropriate safeguards." McGinnis feels that peer review is better "than regulation to address technical issues that are not possible to capture through bureaucratic mandates". McGinnis notes that his proposal stands in contrast to that of the Machine Intelligence Research Institute, which generally aims to avoid government involvement in friendly AI.[20]

Criticism


Some critics believe that both human-level AI and superintelligence are unlikely and that, therefore, friendly AI is unlikely. Writing in The Guardian, Alan Winfield compares human-level artificial intelligence with faster-than-light travel in terms of difficulty and states that while we need to be "cautious and prepared" given the stakes involved, we "don't need to be obsessing" about the risks of superintelligence.[21] Boyles and Joaquin, on the other hand, argue that Luke Muehlhauser and Nick Bostrom's proposal to create friendly AIs appears to be bleak, because Muehlhauser and Bostrom seem to hold the idea that intelligent machines could be programmed to think counterfactually about the moral values that human beings would have had.[13] In an article in AI & Society, Boyles and Joaquin maintain that such AIs would not be that friendly, given: the infinite number of antecedent counterfactual conditions that would have to be programmed into a machine; the difficulty of cashing out the set of moral values, that is, values more ideal than the ones human beings possess at present; and the apparent disconnect between counterfactual antecedents and the ideal value consequent.[14]

Some philosophers claim that any truly "rational" agent, whether artificial or human, will naturally be benevolent; in this view, deliberate safeguards designed to produce a friendly AI could be unnecessary or even harmful.[22] Other critics question whether artificial intelligence can be friendly. Adam Keiper and Ari N. Schulman, editors of the technology journal The New Atlantis, say that it will be impossible ever to guarantee "friendly" behavior in AIs because problems of ethical complexity will not yield to software advances or increases in computing power. They write that the criteria upon which friendly AI theories are based work "only when one has not only great powers of prediction about the likelihood of myriad possible outcomes but certainty and consensus on how one values the different outcomes."[23]

The inner workings of advanced AI systems may be complex and difficult to interpret, leading to concerns about transparency and accountability.[24]


References

  1. ^ Tegmark, Max (2014). "Life, Our Universe and Everything". Our Mathematical Universe: My Quest for the Ultimate Nature of Reality (First ed.). Knopf Doubleday Publishing. ISBN 9780307744258. Its owner may cede control to what Eliezer Yudkowsky terms a "Friendly AI,"...
  2. ^ a b Russell, Stuart; Norvig, Peter (2009). Artificial Intelligence: A Modern Approach. Prentice Hall. ISBN 978-0-13-604259-4.
  3. ^ Leighton, Jonathan (2011). The Battle for Compassion: Ethics in an Apathetic Universe. Algora. ISBN 978-0-87586-870-7.
  4. ^ Wallach, Wendell; Allen, Colin (2009). Moral Machines: Teaching Robots Right from Wrong. Oxford University Press, Inc. ISBN 978-0-19-537404-9.
  5. ^ Kevin LaGrandeur (2011). "The Persistent Peril of the Artificial Slave". Science Fiction Studies. 38 (2): 232. doi:10.5621/sciefictstud.38.2.0232. Archived from the original on January 13, 2023. Retrieved May 6, 2013.
  6. ^ Isaac Asimov (1964). "Introduction". The Rest of the Robots. Doubleday. ISBN 0-385-09041-2.
  7. ^ Eliezer Yudkowsky (2008). "Artificial Intelligence as a Positive and Negative Factor in Global Risk" (PDF). In Nick Bostrom; Milan M. Ćirković (eds.). Global Catastrophic Risks. pp. 308–345. Archived (PDF) from the original on October 19, 2013. Retrieved October 19, 2013.
  8. ^ Omohundro, S. M. (February 2008). "The basic AI drives". Artificial General Intelligence. 171: 483–492. CiteSeerX 10.1.1.393.8356.
  9. ^ Bostrom, Nick (2014). "Chapter 7: The Superintelligent Will". Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press. ISBN 9780199678112.
  10. ^ Dvorsky, George (April 26, 2013). "How Skynet Might Emerge From Simple Physics". Gizmodo. Archived from the original on October 8, 2021. Retrieved December 23, 2021.
  11. ^ Wissner-Gross, A. D.; Freer, C. E. (2013). "Causal entropic forces". Physical Review Letters. 110 (16): 168702. Bibcode:2013PhRvL.110p8702W. doi:10.1103/PhysRevLett.110.168702. hdl:1721.1/79750. PMID 23679649.
  12. ^ Muehlhauser, Luke (July 31, 2013). "AI Risk and the Security Mindset". Machine Intelligence Research Institute. Archived from the original on July 19, 2014. Retrieved July 15, 2014.
  13. ^ a b Muehlhauser, Luke; Bostrom, Nick (December 17, 2013). "Why We Need Friendly AI". Think. 13 (36): 41–47. doi:10.1017/s1477175613000316. ISSN 1477-1756. S2CID 143657841.
  14. ^ a b Boyles, Robert James M.; Joaquin, Jeremiah Joven (July 23, 2019). "Why friendly AIs won't be that friendly: a friendly reply to Muehlhauser and Bostrom". AI & Society. 35 (2): 505–507. doi:10.1007/s00146-019-00903-0. ISSN 0951-5666. S2CID 198190745.
  15. ^ Chan, Berman (March 4, 2020). "The rise of artificial intelligence and the crisis of moral passivity". AI & Society. 35 (4): 991–993. doi:10.1007/s00146-020-00953-9. ISSN 1435-5655. S2CID 212407078. Archived from the original on February 10, 2023. Retrieved January 21, 2023.
  16. ^ a b Eliezer Yudkowsky (2004). "Coherent Extrapolated Volition" (PDF). Singularity Institute for Artificial Intelligence. Archived (PDF) from the original on September 30, 2015. Retrieved September 12, 2015.
  17. ^ a b Hendry, Erica R. (January 21, 2014). "What Happens When Artificial Intelligence Turns On Us?". Smithsonian Magazine. Archived from the original on July 19, 2014. Retrieved July 15, 2014.
  18. ^ Baum, Seth D. (September 28, 2016). "On the promotion of safe and socially beneficial artificial intelligence". AI & Society. 32 (4): 543–551. doi:10.1007/s00146-016-0677-0. ISSN 0951-5666. S2CID 29012168.
  19. ^ a b c d Russell, Stuart (October 8, 2019). Human Compatible: Artificial Intelligence and the Problem of Control. United States: Viking. ISBN 978-0-525-55861-3. OCLC 1083694322.
  20. ^ McGinnis, John O. (Summer 2010). "Accelerating AI". Northwestern University Law Review. 104 (3): 1253–1270. Archived from the original on December 1, 2014. Retrieved July 16, 2014.
  21. ^ Winfield, Alan (August 9, 2014). "Artificial intelligence will not turn into a Frankenstein's monster". The Guardian. Archived from the original on September 17, 2014. Retrieved September 17, 2014.
  22. ^ Kornai, András (May 15, 2014). "Bounding the impact of AGI". Journal of Experimental & Theoretical Artificial Intelligence. 26 (3). Informa UK Limited: 417–438. doi:10.1080/0952813x.2014.895109. ISSN 0952-813X. S2CID 7067517. ...the essence of AGIs is their reasoning facilities, and it is the very logic of their being that will compel them to behave in a moral fashion... The real nightmare scenario (is one where) humans find it advantageous to strongly couple themselves to AGIs, with no guarantees against self-deception.
  23. ^ Keiper, Adam; Schulman, Ari N. (Summer 2011). "The Problem with 'Friendly' Artificial Intelligence". The New Atlantis. No. 32. pp. 80–89. Archived from the original on January 15, 2012. Retrieved January 16, 2012.
  24. ^ Norvig, Peter; Russell, Stuart (2010). Artificial Intelligence: A Modern Approach (3rd ed.). Pearson. ISBN 978-0136042594.

Further reading

  • Yudkowsky, E. (2008). Artificial Intelligence as a Positive and Negative Factor in Global Risk. In Global Catastrophic Risks, Oxford University Press.
    Discusses Artificial Intelligence from the perspective of Existential risk. In particular, Sections 1-4 give background to the definition of Friendly AI in Section 5. Section 6 gives two classes of mistakes (technical and philosophical) which would both lead to the accidental creation of non-Friendly AIs. Sections 7-13 discuss further related issues.
  • Omohundro, S. (2008). The Basic AI Drives. Appeared in AGI-08 – Proceedings of the First Conference on Artificial General Intelligence.
  • Mason, C. (2008). Human-Level AI Requires Compassionate Intelligence (Archived 2025-08-06 at the Wayback Machine). Appears in AAAI 2008 Workshop on Meta-Reasoning: Thinking About Thinking.
  • Froding, B. and Peterson, M. (2021). Friendly AI. Ethics and Information Technology, Vol. 23, pp. 207–214.