
The basic scheme of a variational autoencoder. The model receives $x$ as input. The encoder compresses it into the latent space. The decoder receives as input the information sampled from the latent space and produces $x'$, as similar as possible to $x$.

In machine learning, a variational autoencoder (VAE) is an artificial neural network architecture introduced by Diederik P. Kingma and Max Welling.[1] It is part of the families of probabilistic graphical models and variational Bayesian methods.[2]

In addition to being seen as an autoencoder neural network architecture, variational autoencoders can also be studied within the mathematical formulation of variational Bayesian methods, connecting a neural encoder network to its decoder through a probabilistic latent space (for example, as a multivariate Gaussian distribution) that corresponds to the parameters of a variational distribution.

Thus, the encoder maps each point (such as an image) from a large complex dataset into a distribution within the latent space, rather than to a single point in that space. The decoder has the opposite function, which is to map from the latent space to the input space, again according to a distribution (although in practice, noise is rarely added during the decoding stage). By mapping a point to a distribution instead of a single point, the network can avoid overfitting the training data. Both networks are typically trained together with the usage of the reparameterization trick, although the variance of the noise model can be learned separately.[citation needed]

Although this type of model was initially designed for unsupervised learning,[3][4] its effectiveness has been proven for semi-supervised learning[5][6] and supervised learning.[7]

Overview of architecture and operation


A variational autoencoder is a generative model with a prior $p_\theta(z)$ and noise distribution $p_\theta(x|z)$, respectively. Usually such models are trained using the expectation-maximization meta-algorithm (e.g. probabilistic PCA, (spike & slab) sparse coding). Such a scheme optimizes a lower bound of the data likelihood, which is usually computationally intractable, and in doing so requires the discovery of q-distributions, or variational posteriors. These q-distributions are normally parameterized for each individual data point in a separate optimization process. However, variational autoencoders use a neural network as an amortized approach to jointly optimize across data points. In that way, the same parameters are reused for multiple data points, which can result in massive memory savings. The first neural network takes as input the data points themselves, and outputs parameters for the variational distribution. As it maps from a known input space to the low-dimensional latent space, it is called the encoder.

The decoder is the second neural network of this model. It is a function that maps from the latent space to the input space, e.g. as the means of the noise distribution. It is possible to use another neural network that maps to the variance, however this can be omitted for simplicity. In such a case, the variance can be optimized with gradient descent.
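
As a concrete illustration, the following is a minimal sketch of such an encoder/decoder pair in PyTorch. The fully connected architecture, the layer sizes, and the names (Encoder, Decoder, latent_dim, and so on) are assumptions made for this example rather than part of the original formulation; the encoder outputs the mean and log-variance of the variational distribution, and the decoder outputs the mean of the noise distribution.

    import torch
    from torch import nn

    class Encoder(nn.Module):
        """Maps an input x to the parameters (mean, log-variance) of q_phi(z|x)."""
        def __init__(self, input_dim: int, hidden_dim: int, latent_dim: int):
            super().__init__()
            self.hidden = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
            self.mean = nn.Linear(hidden_dim, latent_dim)     # mu_phi(x)
            self.log_var = nn.Linear(hidden_dim, latent_dim)  # log sigma_phi(x)^2

        def forward(self, x):
            h = self.hidden(x)
            return self.mean(h), self.log_var(h)

    class Decoder(nn.Module):
        """Maps a latent code z to the mean of the noise distribution p_theta(x|z)."""
        def __init__(self, latent_dim: int, hidden_dim: int, output_dim: int):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
                nn.Linear(hidden_dim, output_dim),
            )

        def forward(self, z):
            return self.net(z)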

To optimize this model, one needs to know two terms: the "reconstruction error", and the Kullback–Leibler divergence (KL-D). Both terms are derived from the free energy expression of the probabilistic model, and therefore differ depending on the noise distribution and the assumed prior of the data, here referred to as the p-distribution. For example, a standard VAE task on continuous images such as ImageNet is typically assumed to have Gaussian-distributed noise, whereas tasks such as binarized MNIST require Bernoulli noise. The KL-D from the free energy expression maximizes the probability mass of the q-distribution that overlaps with the p-distribution, which unfortunately can result in mode-seeking behaviour. The "reconstruction" term is the remainder of the free energy expression, and requires a sampling approximation to compute its expectation value.[8]
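
To make the dependence of the reconstruction term on the assumed noise model concrete, the sketch below contrasts the two common choices in PyTorch; the tensors x and x_hat and their shapes are illustrative assumptions, and the code is a simplified sketch rather than a reference implementation.

    import torch
    import torch.nn.functional as F

    x = torch.rand(16, 784)      # a batch of inputs (continuous, or binarized for the Bernoulli case)
    x_hat = torch.rand(16, 784)  # decoder outputs (interpreted as means, or pixel-on probabilities)

    # Gaussian noise model (e.g. natural images): the reconstruction term is,
    # up to constants, a squared error between input x and reconstruction x_hat.
    recon_gaussian = F.mse_loss(x_hat, x, reduction="sum")

    # Bernoulli noise model (e.g. binarized MNIST): the reconstruction term is
    # the binary cross-entropy between x and the predicted probabilities x_hat.
    recon_bernoulli = F.binary_cross_entropy(x_hat, x, reduction="sum")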

More recent approaches replace Kullback–Leibler divergence (KL-D) with various statistical distances, see "Statistical distance VAE variants" below.

Formulation


From the point of view of probabilistic modeling, one wants to maximize the likelihood of the data $x$ under a chosen parameterized probability distribution $p_\theta(x) = p(x|\theta)$. This distribution is usually chosen to be a Gaussian $\mathcal{N}(x|\mu,\sigma)$, parameterized by $\mu$ and $\sigma$ respectively, and as a member of the exponential family it is easy to work with as a noise distribution. Simple distributions are easy enough to maximize; however, distributions where a prior is assumed over the latents $z$ result in intractable integrals. Let us find $p_\theta(x)$ via marginalizing over $z$:

$p_\theta(x) = \int_z p_\theta(x,z) \, dz,$
where $p_\theta(x,z)$ represents the joint distribution under $p_\theta$ of the observable data $x$ and its latent representation or encoding $z$. According to the chain rule, the equation can be rewritten as
$p_\theta(x) = \int_z p_\theta(x|z)\, p_\theta(z) \, dz.$

In the vanilla variational autoencoder, $z$ is usually taken to be a finite-dimensional vector of real numbers, and $p_\theta(x|z)$ to be a Gaussian distribution. Then $p_\theta(x)$ is a mixture of Gaussian distributions.
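
As a worked illustration of why this marginal is approached by sampling and bounding rather than exact integration, the following sketch estimates $\ln p_\theta(x)$ by naive Monte Carlo over the prior. The standard-normal prior, the unit-variance Gaussian decoder, and all names are assumptions made for the example; such a direct estimator is high-variance in practice, which is what motivates the variational approach below.

    import torch

    def log_p_x_naive(x, decoder, latent_dim=2, n_samples=1000, sigma=1.0):
        """Naive Monte Carlo estimate of log p_theta(x) = log E_{z ~ p(z)}[p_theta(x|z)],
        up to the Gaussian normalizing constant. `decoder` can be, e.g., the Decoder sketched above."""
        z = torch.randn(n_samples, latent_dim)              # z ~ p(z) = N(0, I)
        x_hat = decoder(z)                                  # means of p_theta(x|z)
        log_px_given_z = -((x - x_hat) ** 2).sum(dim=-1) / (2 * sigma ** 2)
        # log (1/K) sum_k p(x|z_k), computed stably in log space
        return torch.logsumexp(log_px_given_z, dim=0) - torch.log(torch.tensor(float(n_samples)))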

It is now possible to define the set of the relationships between the input data and its latent representation as

  • Prior $p_\theta(z)$
  • Likelihood $p_\theta(x|z)$
  • Posterior $p_\theta(z|x)$

Unfortunately, the computation of $p_\theta(z|x)$ is expensive and in most cases intractable. To speed up the calculation and make it feasible, it is necessary to introduce a further function to approximate the posterior distribution as
$q_\phi(z|x) \approx p_\theta(z|x),$

with $\phi$ defined as the set of real values that parametrize $q$. This is sometimes called amortized inference, since by "investing" in finding a good $q_\phi$, one can later infer $z$ from $x$ quickly without doing any integrals.
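
Concretely, once $q_\phi$ has been trained, amortized inference is a single forward pass through the encoder; this short usage sketch reuses the hypothetical Encoder class from the sketch above.

    import torch

    # Amortized inference: no per-datapoint optimization, just one forward pass
    # through the trained encoder (the Encoder class sketched earlier).
    encoder = Encoder(input_dim=784, hidden_dim=256, latent_dim=2)
    x_new = torch.rand(1, 784)        # a new data point
    mu, log_var = encoder(x_new)      # parameters of q_phi(z | x_new)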

In this way, the problem is to find a good probabilistic autoencoder, in which the conditional likelihood distribution $p_\theta(x|z)$ is computed by the probabilistic decoder, and the approximated posterior distribution $q_\phi(z|x)$ is computed by the probabilistic encoder.

Parametrize the encoder as $E_\phi$, and the decoder as $D_\theta$.

Evidence lower bound (ELBO)


Like many deep learning approaches that use gradient-based optimization, VAEs require a differentiable loss function to update the network weights through backpropagation.

For variational autoencoders, the idea is to jointly optimize the generative model parameters $\theta$ to reduce the reconstruction error between the input and the output, and the variational parameters $\phi$ to make $q_\phi(z|x)$ as close as possible to $p_\theta(z|x)$. As reconstruction loss, mean squared error and cross entropy are often used.

As distance loss between the two distributions, the Kullback–Leibler divergence $D_{KL}(q_\phi(z|x) \parallel p_\theta(z|x))$ is a good choice to squeeze $q_\phi(z|x)$ under $p_\theta(z|x)$.[8][9]

The distance loss just defined is expanded as
$D_{KL}(q_\phi(z|x) \parallel p_\theta(z|x)) = \mathbb{E}_{z \sim q_\phi(\cdot|x)}\left[\ln \frac{q_\phi(z|x)}{p_\theta(z|x)}\right] = \mathbb{E}_{z \sim q_\phi(\cdot|x)}\left[\ln \frac{q_\phi(z|x)\, p_\theta(x)}{p_\theta(x,z)}\right] = \ln p_\theta(x) + \mathbb{E}_{z \sim q_\phi(\cdot|x)}\left[\ln \frac{q_\phi(z|x)}{p_\theta(x,z)}\right].$

Now define the evidence lower bound (ELBO):
$L_{\theta,\phi}(x) := \mathbb{E}_{z \sim q_\phi(\cdot|x)}\left[\ln \frac{p_\theta(x,z)}{q_\phi(z|x)}\right] = \ln p_\theta(x) - D_{KL}(q_\phi(\cdot|x) \parallel p_\theta(\cdot|x)).$
Maximizing the ELBO,
$\theta^*, \phi^* = \underset{\theta,\phi}{\operatorname{argmax}}\, L_{\theta,\phi}(x),$
is equivalent to simultaneously maximizing $\ln p_\theta(x)$ and minimizing $D_{KL}(q_\phi(z|x) \parallel p_\theta(z|x))$. That is, maximizing the log-likelihood of the observed data, and minimizing the divergence of the approximate posterior $q_\phi(\cdot|x)$ from the exact posterior $p_\theta(\cdot|x)$.

The form given is not very convenient for maximization, but the following, equivalent form, is:
$L_{\theta,\phi}(x) = \mathbb{E}_{z \sim q_\phi(\cdot|x)}\left[\ln p_\theta(x|z)\right] - D_{KL}(q_\phi(\cdot|x) \parallel p_\theta(\cdot)),$
where $\ln p_\theta(x|z)$ is implemented as $-\frac{1}{2}\|x - D_\theta(z)\|_2^2$, since that is, up to an additive constant, what $x|z \sim \mathcal{N}(D_\theta(z), I)$ yields. That is, we model the distribution of $x$ conditional on $z$ to be a Gaussian distribution centered on $D_\theta(z)$. The distributions of $q_\phi(z|x)$ and $p_\theta(z)$ are often also chosen to be Gaussians as $z|x \sim \mathcal{N}(E_\phi(x), \sigma_\phi(x)^2 I)$ and $z \sim \mathcal{N}(0, I)$, with which we obtain, by the formula for the KL divergence of Gaussians:
$L_{\theta,\phi}(x) = -\frac{1}{2}\mathbb{E}_{z \sim q_\phi(\cdot|x)}\left[\|x - D_\theta(z)\|_2^2\right] - \frac{1}{2}\left(N\sigma_\phi(x)^2 + \|E_\phi(x)\|_2^2 - 2N\ln\sigma_\phi(x)\right) + \mathrm{Const}.$
Here $N$ is the dimension of $z$. For a more detailed derivation and more interpretations of the ELBO and its maximization, see the main article on the evidence lower bound.
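
A minimal sketch of this Gaussian ELBO as a training loss in PyTorch is given below; the tensor names (mu, log_var, x_hat) follow the encoder/decoder sketches above, the unit-variance decoder and standard-normal prior are the assumptions just stated, and the negative ELBO is returned so it can be minimized.

    import torch

    def negative_elbo(x, x_hat, mu, log_var):
        """-ELBO for a Gaussian decoder with unit variance and a standard-normal prior.
        mu and log_var parameterize q_phi(z|x) = N(mu, diag(exp(log_var)))."""
        # Reconstruction term: 1/2 * ||x - D_theta(z)||^2 (up to an additive constant)
        recon = 0.5 * ((x - x_hat) ** 2).sum(dim=-1)
        # Closed-form KL( N(mu, diag(sigma^2)) || N(0, I) )
        kl = 0.5 * (log_var.exp() + mu.pow(2) - 1.0 - log_var).sum(dim=-1)
        return (recon + kl).mean()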

Reparameterization

The scheme of the reparameterization trick. The randomness variable $\varepsilon$ is injected into the latent space $z$ as external input. In this way, it is possible to backpropagate the gradient without involving a stochastic variable during the update.

To efficiently search for $\theta^*, \phi^* = \underset{\theta,\phi}{\operatorname{argmax}}\, L_{\theta,\phi}(x)$, the typical method is gradient ascent.

It is straightforward to find
$\nabla_\theta \mathbb{E}_{z \sim q_\phi(\cdot|x)}\left[\ln \frac{p_\theta(x,z)}{q_\phi(z|x)}\right] = \mathbb{E}_{z \sim q_\phi(\cdot|x)}\left[\nabla_\theta \ln \frac{p_\theta(x,z)}{q_\phi(z|x)}\right].$
However,
$\nabla_\phi \mathbb{E}_{z \sim q_\phi(\cdot|x)}\left[\ln \frac{p_\theta(x,z)}{q_\phi(z|x)}\right]$
does not allow one to put the $\nabla_\phi$ inside the expectation, since $\phi$ appears in the probability distribution itself. The reparameterization trick (also known as stochastic backpropagation[10]) bypasses this difficulty.[8][11][12]

The most important example is when $z \sim q_\phi(\cdot|x)$ is normally distributed, as $\mathcal{N}(\mu_\phi(x), \Sigma_\phi(x))$.

The scheme of a variational autoencoder after the reparameterization trick

This can be reparametrized by letting $\varepsilon \sim \mathcal{N}(0, I)$ be a "standard random number generator", and constructing $z$ as $z = \mu_\phi(x) + L_\phi(x)\varepsilon$. Here, $L_\phi(x)$ is obtained by the Cholesky decomposition:
$\Sigma_\phi(x) = L_\phi(x) L_\phi(x)^T.$
Then we have
$\nabla_\phi \mathbb{E}_{z \sim q_\phi(\cdot|x)}\left[\ln \frac{p_\theta(x,z)}{q_\phi(z|x)}\right] = \mathbb{E}_{\varepsilon \sim \mathcal{N}(0,I)}\left[\nabla_\phi \ln \frac{p_\theta(x, \mu_\phi(x) + L_\phi(x)\varepsilon)}{q_\phi(\mu_\phi(x) + L_\phi(x)\varepsilon \,|\, x)}\right],$
and so we obtain an unbiased estimator of the gradient, allowing stochastic gradient descent.

Since we reparametrized $z$, we need to find $q_\phi(z|x)$. Let $q_0$ be the probability density function for $\varepsilon$, then [clarification needed]
$\ln q_\phi(z|x) = \ln q_0(\varepsilon) - \ln \left|\det\left(\partial_\varepsilon z\right)\right|,$
where $\partial_\varepsilon z$ is the Jacobian matrix of $z$ with respect to $\varepsilon$. Since $z = \mu_\phi(x) + L_\phi(x)\varepsilon$, this is
$\ln q_\phi(z|x) = -\frac{1}{2}\|\varepsilon\|^2 - \ln \left|\det L_\phi(x)\right| - \frac{n}{2}\ln(2\pi).$
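
In the common diagonal-covariance case, where the Cholesky factor reduces to an elementwise standard deviation, the trick is a few lines of code; the following PyTorch sketch makes that assumption for simplicity.

    import torch

    def reparameterize(mu, log_var):
        """Draw z ~ N(mu, diag(exp(log_var))) as a differentiable function of (mu, log_var)."""
        std = torch.exp(0.5 * log_var)  # elementwise standard deviation (Cholesky factor of a diagonal covariance)
        eps = torch.randn_like(std)     # epsilon ~ N(0, I), the "standard random number generator"
        return mu + std * eps           # gradients flow through mu and std, not through eps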

Variations


Many variational autoencoder applications and extensions have been used to adapt the architecture to other domains and improve its performance.

β-VAE is an implementation with a weighted Kullback–Leibler divergence term to automatically discover and interpret factorised latent representations. With this implementation, it is possible to force manifold disentanglement for values of β greater than one. This architecture can discover disentangled latent factors without supervision.[13][14]
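
In code, the change relative to the plain Gaussian ELBO sketched earlier is only a reweighting of the KL term; the helper below mirrors that sketch, and the value of β is an illustrative assumption.

    def beta_vae_loss(x, x_hat, mu, log_var, beta=4.0):
        """beta-VAE objective: reconstruction + beta * KL; beta > 1 encourages disentangled latents."""
        recon = 0.5 * ((x - x_hat) ** 2).sum(dim=-1)
        kl = 0.5 * (log_var.exp() + mu.pow(2) - 1.0 - log_var).sum(dim=-1)
        return (recon + beta * kl).mean()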

The conditional VAE (CVAE) inserts label information in the latent space to force a deterministic constrained representation of the learned data.[15]
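
One common way to realize this conditioning, sketched below under the assumption of a one-hot class label concatenated to the inputs of both networks (names and shapes are illustrative), is:

    import torch
    import torch.nn.functional as F

    def cvae_inputs(x, y, num_classes):
        """Build conditional inputs: the encoder sees (x, y) for q_phi(z|x, y),
        and the decoder later sees (z, y) for p_theta(x|z, y)."""
        y_onehot = F.one_hot(y, num_classes).float()
        encoder_input = torch.cat([x, y_onehot], dim=-1)
        return encoder_input, y_onehot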

Some structures directly deal with the quality of the generated samples[16][17] or implement more than one latent space to further improve the representation learning.

Some architectures mix VAE and generative adversarial networks to obtain hybrid models.[18][19][20]

It is not necessary to use gradients to update the encoder. In fact, the encoder is not necessary for the generative model.[21]

Statistical distance VAE variants


After the initial work of Diederik P. Kingma and Max Welling,[22] several procedures were proposed to formulate the operation of the VAE in a more abstract way. In these approaches the loss function is composed of two parts:

  • the usual reconstruction error part, which seeks to ensure that the encoder-then-decoder mapping is as close to the identity map as possible; the sampling is done at run time from the empirical distribution $\mu_{emp}$ of objects available (e.g., for MNIST or ImageNet this will be the empirical probability law of all images in the dataset). This gives the term $\mathbb{E}_{x \sim \mu_{emp}}\left[\|x - D_\theta(E_\phi(x))\|_2^2\right]$.
  • a variational part that ensures that, when the empirical distribution $\mu_{emp}$ is passed through the encoder $E_\phi$, we recover the target distribution, denoted here $\mu_{target}$, which is usually taken to be a multivariate normal distribution. We will denote this pushforward measure $E_\phi \# \mu_{emp}$, which in practice is just the empirical distribution obtained by passing all dataset objects through the encoder $E_\phi$. In order to make sure that $E_\phi \# \mu_{emp}$ is close to the target $\mu_{target}$, a statistical distance $d$ is invoked and the term $d(E_\phi \# \mu_{emp}, \mu_{target})$ is added to the loss.

We obtain the final formula for the loss:
$L_{\theta,\phi} = \mathbb{E}_{x \sim \mu_{emp}}\left[\|x - D_\theta(E_\phi(x))\|_2^2\right] + d\left(E_\phi \# \mu_{emp},\, \mu_{target}\right).$

The statistical distance $d$ requires special properties: for instance, it has to possess a formula as an expectation, because the loss function will need to be optimized by stochastic optimization algorithms. Several distances can be chosen, and this gave rise to several flavors of VAEs.
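
As one concrete instance of such a distance, a kernel-based discrepancy like the squared Maximum Mean Discrepancy can be estimated from samples as an expectation; the PyTorch sketch below (Gaussian kernel, biased estimator) illustrates this family of losses and is not the specific construction of any one cited variant.

    import torch

    def gaussian_kernel(a, b, bandwidth=1.0):
        """k(a_i, b_j) = exp(-||a_i - b_j||^2 / (2 * bandwidth^2))."""
        d2 = torch.cdist(a, b).pow(2)
        return torch.exp(-d2 / (2 * bandwidth ** 2))

    def mmd(z_encoded, z_prior, bandwidth=1.0):
        """Biased sample estimate of the squared Maximum Mean Discrepancy between
        encoded (pushforward) samples and samples from the target prior."""
        k_zz = gaussian_kernel(z_encoded, z_encoded, bandwidth).mean()
        k_pp = gaussian_kernel(z_prior, z_prior, bandwidth).mean()
        k_zp = gaussian_kernel(z_encoded, z_prior, bandwidth).mean()
        return k_zz + k_pp - 2.0 * k_zp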


References

  1. ^ Kingma, Diederik P.; Welling, Max (2013). "Auto-Encoding Variational Bayes". arXiv:1312.6114 [stat.ML].
  2. ^ Pinheiro Cinelli, Lucas; et al. (2021). "Variational Autoencoder". Variational Methods for Machine Learning with Applications to Deep Networks. Springer. pp. 111–149. doi:10.1007/978-3-030-70679-1_5. ISBN 978-3-030-70681-4. S2CID 240802776.
  3. ^ Dilokthanakul, Nat; Mediano, Pedro A. M.; Garnelo, Marta; Lee, Matthew C. H.; Salimbeni, Hugh; Arulkumaran, Kai; Shanahan, Murray (2016). "Deep Unsupervised Clustering with Gaussian Mixture Variational Autoencoders". arXiv:1611.02648 [cs.LG].
  4. ^ Hsu, Wei-Ning; Zhang, Yu; Glass, James (December 2017). "Unsupervised domain adaptation for robust speech recognition via variational autoencoder-based data augmentation". 2017 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU). pp. 16–23. arXiv:1707.06265. doi:10.1109/ASRU.2017.8268911. ISBN 978-1-5090-4788-8. S2CID 22681625.
  5. ^ Ehsan Abbasnejad, M.; Dick, Anthony; van den Hengel, Anton (2017). Infinite Variational Autoencoder for Semi-Supervised Learning. pp. 5888–5897.
  6. ^ Xu, Weidi; Sun, Haoze; Deng, Chao; Tan, Ying (2017). "Variational Autoencoder for Semi-Supervised Text Classification". Proceedings of the AAAI Conference on Artificial Intelligence. 31 (1). doi:10.1609/aaai.v31i1.10966. S2CID 2060721.
  7. ^ Kameoka, Hirokazu; Li, Li; Inoue, Shota; Makino, Shoji (2019). "Supervised Determined Source Separation with Multichannel Variational Autoencoder". Neural Computation. 31 (9): 1891–1914. doi:10.1162/neco_a_01217. PMID 31335290. S2CID 198168155.
  8. ^ a b c Kingma, Diederik P.; Welling, Max (2013). "Auto-Encoding Variational Bayes". arXiv:1312.6114 [stat.ML].
  9. ^ "From Autoencoder to Beta-VAE". Lil'Log. 2018.
  10. ^ Rezende, Danilo Jimenez; Mohamed, Shakir; Wierstra, Daan (2014). "Stochastic Backpropagation and Approximate Inference in Deep Generative Models". International Conference on Machine Learning. PMLR: 1278–1286. arXiv:1401.4082.
  11. ^ Bengio, Yoshua; Courville, Aaron; Vincent, Pascal (2013). "Representation Learning: A Review and New Perspectives". IEEE Transactions on Pattern Analysis and Machine Intelligence. 35 (8): 1798–1828. arXiv:1206.5538. doi:10.1109/TPAMI.2013.50. ISSN 1939-3539. PMID 23787338. S2CID 393948.
  12. ^ Kingma, Diederik P.; Rezende, Danilo J.; Mohamed, Shakir; Welling, Max (2014). "Semi-Supervised Learning with Deep Generative Models". arXiv:1406.5298 [cs.LG].
  13. ^ Higgins, Irina; Matthey, Loic; Pal, Arka; Burgess, Christopher; Glorot, Xavier; Botvinick, Matthew; Mohamed, Shakir; Lerchner, Alexander (2017). beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework. ICLR.
  14. ^ Burgess, Christopher P.; Higgins, Irina; Pal, Arka; Matthey, Loic; Watters, Nick; Desjardins, Guillaume; Lerchner, Alexander (2018). "Understanding disentangling in β-VAE". arXiv:1804.03599 [stat.ML].
  15. ^ Sohn, Kihyuk; Lee, Honglak; Yan, Xinchen (2015). Learning Structured Output Representation using Deep Conditional Generative Models (PDF). NeurIPS.
  16. ^ Dai, Bin; Wipf, David (2019). "Diagnosing and Enhancing VAE Models". arXiv:1903.05789 [cs.LG].
  17. ^ Dorta, Garoe; Vicente, Sara; Agapito, Lourdes; Campbell, Neill D. F.; Simpson, Ivor (2018). "Training VAEs Under Structured Residuals". arXiv:1804.01050 [stat.ML].
  18. ^ Larsen, Anders Boesen Lindbo; Sønderby, Søren Kaae; Larochelle, Hugo; Winther, Ole (2016). "Autoencoding beyond pixels using a learned similarity metric". International Conference on Machine Learning. PMLR: 1558–1566. arXiv:1512.09300.
  19. ^ Bao, Jianmin; Chen, Dong; Wen, Fang; Li, Houqiang; Hua, Gang (2017). "CVAE-GAN: Fine-Grained Image Generation Through Asymmetric Training". pp. 2745–2754. arXiv:1703.10155 [cs.CV].
  20. ^ Gao, Rui; Hou, Xingsong; Qin, Jie; Chen, Jiaxin; Liu, Li; Zhu, Fan; Zhang, Zhao; Shao, Ling (2020). "Zero-VAE-GAN: Generating Unseen Features for Generalized and Transductive Zero-Shot Learning". IEEE Transactions on Image Processing. 29: 3665–3680. Bibcode:2020ITIP...29.3665G. doi:10.1109/TIP.2020.2964429. ISSN 1941-0042. PMID 31940538. S2CID 210334032.
  21. ^ Drefs, J.; Guiraud, E.; Panagiotou, F.; Lücke, J. (2023). "Direct evolutionary optimization of variational autoencoders with binary latents". Joint European Conference on Machine Learning and Knowledge Discovery in Databases. Lecture Notes in Computer Science. Vol. 13715. Springer Nature Switzerland. pp. 357–372. doi:10.1007/978-3-031-26409-2_22. ISBN 978-3-031-26408-5.
  22. ^ Kingma, Diederik P.; Welling, Max (2013). "Auto-Encoding Variational Bayes". arXiv:1312.6114 [stat.ML].
  23. ^ Kolouri, Soheil; Pope, Phillip E.; Martin, Charles E.; Rohde, Gustavo K. (2019). "Sliced Wasserstein Auto-Encoders". International Conference on Learning Representations.
  24. ^ Turinici, Gabriel (2021). "Radon-Sobolev Variational Auto-Encoders". Neural Networks. 141: 294–305. arXiv:1911.13135. doi:10.1016/j.neunet.2021.04.018. ISSN 0893-6080. PMID 33933889.
  25. ^ Gretton, A.; Li, Y.; Swersky, K.; Zemel, R.; Turner, R. (2017). "A Polya Contagion Model for Networks". IEEE Transactions on Control of Network Systems. 5 (4): 1998–2010. arXiv:1705.02239. doi:10.1109/TCNS.2017.2781467.
  26. ^ Tolstikhin, I.; Bousquet, O.; Gelly, S.; Schölkopf, B. (2018). "Wasserstein Auto-Encoders". arXiv:1711.01558 [stat.ML].
  27. ^ Louizos, C.; Shi, X.; Swersky, K.; Li, Y.; Welling, M. (2019). "Kernelized Variational Autoencoders". arXiv:1901.02401 [astro-ph.CO].
