
Unsupervised learning is a framework in machine learning where, in contrast to supervised learning, algorithms learn patterns exclusively from unlabeled data.[1] Other frameworks in the spectrum of supervision include weak or semi-supervision, where a small portion of the data is tagged, and self-supervision. Some researchers consider self-supervised learning a form of unsupervised learning.[2]

Conceptually, unsupervised learning divides into the aspects of data, training, algorithm, and downstream applications. Typically, the dataset is harvested cheaply "in the wild", such as a massive text corpus obtained by web crawling with only minor filtering (such as Common Crawl). This compares favorably to supervised learning, where the dataset (such as ImageNet1000) is typically constructed manually, which is much more expensive.

Some algorithms were designed specifically for unsupervised learning, such as clustering algorithms like k-means, dimensionality reduction techniques like principal component analysis (PCA), Boltzmann machine learning, and autoencoders. After the rise of deep learning, most large-scale unsupervised learning has been done by training general-purpose neural network architectures with gradient descent, adapted to unsupervised learning through an appropriate training procedure.

Sometimes a trained model can be used as-is, but more often it is modified for downstream applications. For example, the generative pretraining method trains a model to generate a textual dataset before fine-tuning it for other applications, such as text classification.[3][4] As another example, autoencoders are trained to produce good features, which can then be used as a module for other models, such as in a latent diffusion model.

Tasks

Figure: tendency for a task to employ supervised vs. unsupervised methods, shown as a Venn diagram. Task names straddling the circle boundaries are intentional; the classical division in which imaginative (generative) tasks use unsupervised methods is blurred in today's learning schemes.

Tasks are often categorized as discriminative (recognition) or generative (imagination). Often but not always, discriminative tasks use supervised methods and generative tasks use unsupervised ones (see the Venn diagram described above); however, the separation is very hazy. For example, object recognition favors supervised learning, but unsupervised learning can also cluster objects into groups. Furthermore, as progress marches onward, some tasks employ both methods, and some tasks swing from one to another. For example, image recognition started off as heavily supervised, but became hybrid by employing unsupervised pre-training, and then moved towards supervision again with the advent of dropout, ReLU, and adaptive learning rates.

A typical generative task is as follows. At each step, a datapoint is sampled from the dataset, part of it is removed, and the model must infer the removed part. This is particularly clear for denoising autoencoders and BERT.
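As a minimal sketch of this kind of corruption step (the token values, mask probability, and the `mask_tokens` helper below are illustrative, not taken from any particular system), a BERT-style objective can be set up by masking a random subset of an input sequence and asking the model to predict the masked positions:

```python
import numpy as np

rng = np.random.default_rng(0)

def mask_tokens(tokens, mask_id, mask_prob=0.15):
    """Randomly replace a fraction of tokens with a mask symbol (BERT-style corruption).

    Returns the corrupted sequence and a boolean array marking which
    positions the model must reconstruct."""
    tokens = np.asarray(tokens)
    mask = rng.random(tokens.shape) < mask_prob
    corrupted = np.where(mask, mask_id, tokens)
    return corrupted, mask

# Toy usage: token ids 1..9, with 0 reserved as the mask symbol.
original = np.array([3, 7, 1, 9, 4, 2, 8, 5])
corrupted, targets = mask_tokens(original, mask_id=0)
# A model would be trained so that model(corrupted) predicts `original`
# at the positions where `targets` is True.
print(corrupted, targets)
```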

Neural network architectures


Training


During the learning phase, an unsupervised network tries to mimic the data it is given and uses the error in its mimicked output to correct itself (i.e., to adjust its weights and biases). Sometimes the error is expressed as a low probability that the erroneous output occurs, or it might be expressed as an unstable high-energy state in the network.

In contrast to supervised methods' dominant use of backpropagation, unsupervised learning also employs other methods, including the Hopfield learning rule, the Boltzmann learning rule, contrastive divergence, wake-sleep, variational inference, maximum likelihood, maximum a posteriori, Gibbs sampling, and backpropagating reconstruction errors or hidden-state reparameterizations. See the comparison of networks below for more details.

Energy


An energy function is a macroscopic measure of a network's activation state. In Boltzmann machines, it plays the role of the cost function. This analogy with physics is inspired by Ludwig Boltzmann's analysis of a gas's macroscopic energy from the microscopic probabilities of particle motion, p ∝ e^(−E/kT), where k is the Boltzmann constant and T is temperature. In the RBM network the relation is p = e^(−E) / Z,[5] where p and E vary over every possible activation pattern and Z = Σ_(all patterns) e^(−E(pattern)). To be more precise, p(a) = e^(−E(a)) / Z, where a is an activation pattern of all neurons (visible and hidden). Hence, some early neural networks bear the name Boltzmann Machine. Paul Smolensky calls −E the Harmony. A network seeks low energy, which is high Harmony.
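As an illustration of this bookkeeping, the sketch below computes the standard RBM energy E(v, h) = −aᵀv − bᵀh − vᵀWh and the corresponding unnormalized probability e^(−E); the sizes and parameter values are arbitrary placeholders:

```python
import numpy as np

def rbm_energy(v, h, W, a, b):
    """Standard RBM energy: E(v, h) = -a.v - b.h - v.W.h."""
    return -a @ v - b @ h - v @ W @ h

# Tiny illustrative RBM: 3 visible units, 2 hidden units.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(3, 2))
a = np.zeros(3)          # visible biases
b = np.zeros(2)          # hidden biases

v = np.array([1.0, 0.0, 1.0])
h = np.array([1.0, 1.0])

E = rbm_energy(v, h, W, a, b)
print(E, np.exp(-E))     # low energy  <=>  high unnormalized probability
```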

Networks


This overview describes various unsupervised networks, the details of which are given in the section Comparison of Networks. In their connection diagrams, circles are neurons and edges between them are connection weights. As network design changes, features are added to enable new capabilities or removed to make learning faster. For instance, neurons change between deterministic (Hopfield) and stochastic (Boltzmann) to allow robust output, weights are removed within a layer (RBM) to hasten learning, or connections are allowed to become asymmetric (Helmholtz).

Hopfield: a network based on magnetic domains in iron with a single self-connected layer. It can be used as a content-addressable memory.

Boltzmann: the network is separated into two layers (hidden vs. visible), but still uses symmetric two-way weights. Following Boltzmann's thermodynamics, individual probabilities give rise to macroscopic energies.

RBM: a Restricted Boltzmann Machine, that is, a Boltzmann machine in which lateral connections within a layer are prohibited to make analysis tractable.

Stacked Boltzmann: this network has multiple RBMs to encode a hierarchy of hidden features. After a single RBM is trained, another hidden layer is added, and the top two layers are trained as a new RBM. Thus the middle layers of the stack act as hidden or visible, depending on the training phase they are in.

Helmholtz: instead of the bidirectional symmetric connections of the stacked Boltzmann machines, it has separate one-way connections that form a loop. It does both generation and discrimination.

Autoencoder: a feed-forward network that aims to find a good middle-layer representation of its input world. This network is deterministic, so it is not as robust as its successor, the VAE.

VAE: applies variational inference to the autoencoder. The middle layer is a set of means and variances for Gaussian distributions. The stochastic nature allows for more robust imagination than the deterministic autoencoder.

Of the networks bearing people's names, only Hopfield worked directly with neural networks. Boltzmann and Helmholtz came before artificial neural networks, but their work in physics and physiology inspired the analytical methods that were used.

History

1974 Ising magnetic model proposed by W. A. Little for cognition.
1980 Kunihiko Fukushima introduces the neocognitron, which is later called a convolutional neural network. It is mostly used in supervised learning, but deserves a mention here.
1982 Ising variant Hopfield net described as content-addressable memories and classifiers by John Hopfield.
1983 Ising variant Boltzmann machine with probabilistic neurons described by Hinton & Sejnowski, following Sherrington & Kirkpatrick's 1975 work.
1986 Paul Smolensky publishes Harmony Theory, which is an RBM with practically the same Boltzmann energy function. Smolensky did not give a practical training scheme; Hinton did in the mid-2000s.
1995 Schmidhuber introduces the LSTM neuron for languages.
1995 Dayan & Hinton introduce the Helmholtz machine.
2013 Kingma, Rezende, & co. introduce variational autoencoders as a Bayesian graphical probability network, with neural nets as components.

Specific Networks


Here, we highlight some characteristics of select networks. The details of each are given in the comparison table below.

Hopfield Network
Ferromagnetism inspired Hopfield networks. A neuron corresponds to an iron domain with binary magnetic moments Up and Down, and neural connections correspond to the domains' influence on each other. Symmetric connections enable a global energy formulation. During inference, the network updates each state using the standard step activation function. Symmetric weights and the right energy function guarantee convergence to a stable activation pattern; asymmetric weights are difficult to analyze. Hopfield nets are used as content-addressable memories (CAM).
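A minimal NumPy sketch of these ideas, assuming the common +1/−1 convention and the Hebbian outer-product storage rule (the pattern, helper names, and step count are illustrative):

```python
import numpy as np

def store_patterns(patterns):
    """Hebbian outer-product rule with zeroed diagonal (no self-connections)."""
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns) / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, state, steps=10):
    """Asynchronous updates with the sign step function; symmetric weights
    guarantee convergence to a stable (low-energy) pattern."""
    state = state.copy()
    for _ in range(steps):
        for i in np.random.permutation(len(state)):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# Store one +1/-1 pattern and recover it from a corrupted copy.
pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])
W = store_patterns(pattern[None, :])
noisy = pattern.copy(); noisy[:2] *= -1          # flip two bits
print(recall(W, noisy))                           # ideally equals `pattern`
```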
Boltzmann Machine
These are stochastic Hopfield nets. Their state values are sampled from a probability density function as follows: suppose a binary neuron fires with the Bernoulli probability p(1) = 1/3 and rests with p(0) = 2/3. One samples from it by taking a uniformly distributed random number y and plugging it into the inverted cumulative distribution function, which in this case is the step function thresholded at 2/3. The inverse function = { 0 if y <= 2/3, 1 if y > 2/3 }.
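A tiny sketch of this inverse-CDF sampling, using the 1/3 firing probability from the example above (the function name and sample count are arbitrary):

```python
import numpy as np

def sample_bernoulli(p_fire, size, seed=0):
    """Inverse-CDF sampling: draw uniform y and threshold it.

    With p(1) = 1/3 and p(0) = 2/3, the threshold sits at 2/3: y <= 2/3
    maps to 0 and y > 2/3 maps to 1, exactly the step function above."""
    rng = np.random.default_rng(seed)
    y = rng.random(size)
    return (y > 1.0 - p_fire).astype(int)

samples = sample_bernoulli(p_fire=1/3, size=10_000)
print(samples.mean())   # approximately 1/3
```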
Sigmoid Belief Net
Introduced by Radford Neal in 1992, this network applies ideas from probabilistic graphical models to neural networks. A key difference is that nodes in graphical models have pre-assigned meanings, whereas Belief Net neurons' features are determined after training. The network is a sparsely connected directed acyclic graph composed of binary stochastic neurons. The learning rule comes from maximum likelihood on p(X): Δwij ∝ sj * (si − pi), where pi = σ(Σj wij sj), the sigmoid of the weighted inputs into neuron i. The sj are activations from an unbiased sample of the posterior distribution, which is problematic due to the Explaining Away problem raised by Judea Pearl. Variational Bayesian methods use a surrogate posterior and simply disregard this complexity.
Deep Belief Network
Introduced by Hinton, this network is a hybrid of the RBM and the sigmoid belief network. The top two layers form an RBM, and the layers from the second layer downwards form a sigmoid belief network. One trains it by the stacked-RBM method and then throws away the recognition weights below the top RBM. As of 2009, three to four layers seemed to be the optimal depth.[6]
Helmholtz machine
The Helmholtz machine is an early inspiration for the variational autoencoder. It combines two networks into one: forward weights operate recognition and backward weights implement imagination. It is perhaps the first network to do both. Helmholtz did not work in machine learning, but he inspired the view of a "statistical inference engine whose function is to infer probable causes of sensory input".[7] The stochastic binary neuron outputs a probability that its state is 0 or 1. The data input is normally not considered a layer, but in the Helmholtz machine's generation mode the data layer receives input from the middle layer and has separate weights for this purpose, so it is considered a layer. Hence this network has three layers.
Variational autoencoder
Variational autoencoders are inspired by Helmholtz machines and combine probability networks with neural networks. An autoencoder is a three-layer CAM network, where the middle layer is supposed to be some internal representation of the input patterns. The encoder neural network is a probability distribution qφ(z given x) and the decoder network is pθ(x given z). The weights are named φ and θ rather than W and V as in the Helmholtz machine, a cosmetic difference. The two networks can be fully connected or use another neural-network scheme.
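A minimal NumPy sketch of the forward pass, assuming single fully connected linear maps for the encoder and decoder (the sizes, weight initializations, and function names are illustrative placeholders; a real VAE would also include the reconstruction and KL terms of the training objective):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 8-dimensional input, 2-dimensional latent code.
D, Z = 8, 2
W_enc = rng.normal(scale=0.1, size=(D, 2 * Z))   # encoder: x -> (mu, log_var)
W_dec = rng.normal(scale=0.1, size=(Z, D))       # decoder: z -> reconstruction

def encode(x):
    """q_phi(z | x): the middle layer outputs a mean and a log-variance per latent unit."""
    h = x @ W_enc
    mu, log_var = h[:Z], h[Z:]
    return mu, log_var

def reparameterize(mu, log_var):
    """Sample z = mu + sigma * eps so the sampling step stays differentiable w.r.t. phi."""
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def decode(z):
    """p_theta(x | z): map the sampled code back to input space."""
    return z @ W_dec

x = rng.normal(size=D)
mu, log_var = encode(x)
z = reparameterize(mu, log_var)
x_hat = decode(z)
print(x_hat.shape)   # (8,)
```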

Comparison of networks

Usage & notables. Hopfield: CAM, traveling salesman problem. Boltzmann: CAM; the freedom of connections makes this network difficult to analyze. RBM: pattern recognition; used on MNIST digits and speech. Stacked RBM: recognition and imagination; trained with unsupervised pre-training and/or supervised fine-tuning. Helmholtz: imagination, mimicry. Autoencoder: language (creative writing, translation) and vision (enhancing blurry images). VAE: generating realistic data.

Neuron. Hopfield: deterministic binary state; activation = { 0 (or −1) if x is negative, 1 otherwise }. Boltzmann: stochastic binary Hopfield neuron. RBM: the same (extended to real-valued in the mid-2000s). Stacked RBM and Helmholtz: the same. Autoencoder: LSTM for language, local receptive fields for vision, usually real-valued ReLU activation. VAE: middle-layer neurons encode means and variances for Gaussians; in run mode (inference), the outputs of the middle layer are values sampled from those Gaussians.

Connections. Hopfield: one layer with symmetric weights; no self-connections. Boltzmann: two layers (one hidden, one visible) with symmetric weights. RBM: the same, but with no lateral connections within a layer. Stacked RBM: the top layer is undirected and symmetric; the other layers are two-way and asymmetric. Helmholtz: three layers with asymmetric weights; two networks combined into one. Autoencoder: three layers (the input counts as a layer even though it has no inbound weights); recurrent layers for NLP, feedforward convolutions for vision; input and output have the same neuron counts. VAE: three layers: input, encoder, distribution-sampler decoder (the sampler is not considered a layer).

Inference & energy. Hopfield: energy is given by the Gibbs probability measure, E = −1/2 Σij wij si sj + Σi θi si. Boltzmann and RBM: the same. Helmholtz: minimize the KL divergence. Autoencoder: inference is feed-forward only, whereas the earlier unsupervised networks ran both forwards and backwards. VAE: minimize error = reconstruction error − KLD.

Training. Hopfield: Δwij = si*sj, for +1/−1 neurons. Boltzmann: Δwij = e*(pij − p'ij), derived from minimizing the KL divergence; e is the learning rate, p' the predicted and p the actual distribution. RBM: Δwij = e*( <vi hj>data − <vi hj>equilibrium ), a form of contrastive divergence with Gibbs sampling, where "<>" denotes expectations (a minimal sketch of this update appears below). Stacked RBM: similar, training one layer at a time and approximating the equilibrium state with a 3-segment pass; no backpropagation. Helmholtz: two-phase wake-sleep training. Autoencoder: backpropagate the reconstruction error. VAE: reparameterize the hidden state for backpropagation.

Strength. Hopfield: resembles physical systems, so it inherits their equations. Boltzmann: the same; hidden neurons act as an internal representation of the external world. RBM: faster, more practical training scheme than Boltzmann machines. Stacked RBM: trains quickly and gives a hierarchical layer of features. Helmholtz: mildly anatomical; analyzable with information theory and statistical mechanics.

Weakness. Boltzmann: hard to train due to the lateral connections. RBM: reaching equilibrium requires too many iterations; integer- and real-valued neurons are more complicated.
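As a concrete illustration of the RBM update quoted in the Training row, here is a minimal CD-1 sketch in NumPy (one Gibbs step, binary units, toy sizes; the function names and hyperparameters are illustrative, and Hinton's practical guide[5] describes many refinements omitted here):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, W, a, b, rng, lr=0.1):
    """One CD-1 step: <vi hj>_data - <vi hj>_reconstruction approximates
    the <vi hj>_data - <vi hj>_equilibrium gradient quoted in the table."""
    # Positive phase: hidden probabilities and a binary sample driven by the data.
    ph0 = sigmoid(v0 @ W + b)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # One Gibbs step: reconstruct the visibles, then the hidden probabilities again.
    pv1 = sigmoid(h0 @ W.T + a)
    ph1 = sigmoid(pv1 @ W + b)
    # Parameter updates from the difference of the two correlation estimates.
    W += lr * (np.outer(v0, ph0) - np.outer(pv1, ph1))
    a += lr * (v0 - pv1)
    b += lr * (ph0 - ph1)
    return W, a, b

# Toy usage: 6 visible units, 3 hidden units, one binary training vector.
rng = np.random.default_rng(1)
W = rng.normal(scale=0.01, size=(6, 3))
a, b = np.zeros(6), np.zeros(3)
v = np.array([1.0, 1.0, 0.0, 0.0, 1.0, 0.0])
for _ in range(100):
    W, a, b = cd1_update(v, W, a, b, rng)
```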

Hebbian Learning, ART, SOM


The classical example of unsupervised learning in the study of neural networks is Donald Hebb's principle, that is, neurons that fire together wire together.[8] In Hebbian learning, the connection is reinforced irrespective of an error; it is exclusively a function of the coincidence of action potentials between the two neurons.[9] A similar version that modifies synaptic weights takes into account the time between the action potentials (spike-timing-dependent plasticity, or STDP). Hebbian learning has been hypothesized to underlie a range of cognitive functions, such as pattern recognition and experiential learning.
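A minimal sketch of the plain Hebbian rule, with illustrative sizes and learning rate; note that there is no error term, only the product of pre- and post-synaptic activity:

```python
import numpy as np

def hebbian_step(w, pre, post, lr=0.01):
    """Plain Hebbian rule: a weight grows whenever the pre- and
    post-synaptic activities coincide; no error signal is involved."""
    return w + lr * np.outer(post, pre)

# Toy usage: 4 presynaptic neurons driving 2 postsynaptic neurons.
w = np.zeros((2, 4))
pre = np.array([1.0, 0.0, 1.0, 0.0])
post = np.array([1.0, 0.0])
for _ in range(10):
    w = hebbian_step(w, pre, post)
print(w)   # connections between co-active pairs have strengthened
```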

Among neural network models, the self-organizing map (SOM) and adaptive resonance theory (ART) are commonly used unsupervised learning algorithms. The SOM is a topographic organization in which nearby locations in the map represent inputs with similar properties. The ART model allows the number of clusters to vary with problem size and lets the user control the degree of similarity between members of the same cluster by means of a user-defined constant called the vigilance parameter. ART networks are used for many pattern recognition tasks, such as automatic target recognition and seismic signal processing.[10]
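A minimal sketch of one SOM training step, assuming a small square grid, a Gaussian neighbourhood, and a fixed learning rate and radius (real implementations decay both over time):

```python
import numpy as np

def som_step(weights, grid, x, lr=0.5, radius=1.0):
    """One SOM update: find the best-matching unit (BMU), then move it and
    its grid neighbours toward the input, weighted by a Gaussian neighbourhood."""
    bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
    grid_dist = np.linalg.norm(grid - grid[bmu], axis=1)
    influence = np.exp(-grid_dist**2 / (2 * radius**2))
    return weights + lr * influence[:, None] * (x - weights)

# Toy usage: a 3x3 map of 2-dimensional prototype vectors.
rng = np.random.default_rng(0)
grid = np.array([[i, j] for i in range(3) for j in range(3)], dtype=float)
weights = rng.random((9, 2))
for _ in range(200):
    weights = som_step(weights, grid, rng.random(2))
print(weights)   # nearby grid locations end up with similar prototypes
```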

Probabilistic methods


Two of the main methods used in unsupervised learning are principal component analysis and cluster analysis. Cluster analysis is used in unsupervised learning to group, or segment, datasets with shared attributes in order to extrapolate algorithmic relationships.[11] Cluster analysis is a branch of machine learning that groups data that has not been labelled, classified, or categorized. Instead of responding to feedback, cluster analysis identifies commonalities in the data and reacts based on the presence or absence of such commonalities in each new piece of data. This approach helps detect anomalous data points that do not fit into any group.
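A minimal sketch of one such clustering method, plain k-means (Lloyd's algorithm), on toy two-dimensional data; the function name and data are illustrative:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's algorithm: alternate assigning points to the nearest
    centroid and recomputing each centroid as the mean of its points."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(np.linalg.norm(X[:, None] - centroids, axis=2), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

# Toy usage: two well-separated blobs in the plane.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
labels, centroids = kmeans(X, k=2)
print(centroids)   # roughly (0, 0) and (3, 3)
```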

A central application of unsupervised learning is in the field of density estimation in statistics,[12] though unsupervised learning encompasses many other domains involving summarizing and explaining data features. It can be contrasted with supervised learning by saying that whereas supervised learning intends to infer a conditional probability distribution p_X(x | y) conditioned on the label y of the input data, unsupervised learning intends to infer an a priori probability distribution p_X(x).

Approaches


Some of the most common algorithms used in unsupervised learning include: (1) clustering, (2) anomaly detection, and (3) approaches for learning latent variable models. Each approach uses several methods as follows:

Clustering methods: hierarchical clustering,[13] k-means,[14] mixture models, DBSCAN, and OPTICS.
Anomaly detection methods: local outlier factor and isolation forest.
Approaches for learning latent variable models: the expectation–maximization algorithm (EM), the method of moments, and blind signal separation techniques such as principal component analysis, independent component analysis, non-negative matrix factorization, and singular value decomposition.

Method of moments


One of the statistical approaches for unsupervised learning is the method of moments. In the method of moments, the unknown parameters of interest in the model are related to the moments of one or more random variables, and thus these unknown parameters can be estimated given the moments. The moments are usually estimated from samples empirically. The basic moments are first- and second-order moments. For a random vector, the first-order moment is the mean vector, and the second-order moment is the covariance matrix (when the mean is zero). Higher-order moments are usually represented using tensors, which generalize matrices to higher orders as multi-dimensional arrays.
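A small sketch of estimating the first and second moments empirically from samples of a known distribution (the distribution parameters and sample size are arbitrary, chosen only so the estimates can be checked against the truth):

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw samples from a known 2-D Gaussian so the true moments are known.
true_mean = np.array([1.0, -2.0])
true_cov = np.array([[2.0, 0.5],
                     [0.5, 1.0]])
X = rng.multivariate_normal(true_mean, true_cov, size=10_000)

# Empirical first moment (mean vector) and centred second moment (covariance).
mean_hat = X.mean(axis=0)
cov_hat = (X - mean_hat).T @ (X - mean_hat) / len(X)

print(mean_hat)   # close to true_mean
print(cov_hat)    # close to true_cov
```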

In particular, the method of moments is shown to be effective in learning the parameters of latent variable models. Latent variable models are statistical models where, in addition to the observed variables, a set of latent variables also exists which is not observed. A highly practical example of latent variable models in machine learning is topic modeling, a statistical model for generating the words (observed variables) in a document based on the topic (latent variable) of the document. In topic modeling, the words in a document are generated according to different statistical parameters when the topic of the document changes. It has been shown that the method of moments (tensor decomposition techniques) consistently recovers the parameters of a large class of latent variable models under some assumptions.[15]

The expectation–maximization algorithm (EM) is also one of the most practical methods for learning latent variable models. However, it can get stuck in local optima, and the algorithm is not guaranteed to converge to the true unknown parameters of the model. In contrast, for the method of moments, global convergence is guaranteed under some conditions.
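A minimal sketch of EM for a two-component, one-dimensional Gaussian mixture with fixed unit variances (the initial means, iteration count, and data are illustrative); changing `mu_init` illustrates how the result can depend on initialization:

```python
import numpy as np

def em_gmm_1d(x, iters=100, mu_init=(0.0, 1.0)):
    """EM for a two-component 1-D Gaussian mixture with unit variances.

    E-step: responsibilities of each component for each point.
    M-step: re-estimate means and mixing weights from the responsibilities.
    Different `mu_init` values can lead to different local optima."""
    mu = np.array(mu_init, dtype=float)
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: unnormalized Gaussian likelihoods, then responsibilities.
        lik = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2)
        resp = lik / lik.sum(axis=1, keepdims=True)
        # M-step: weighted means and mixing proportions.
        nk = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        pi = nk / len(x)
    return mu, pi

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 700)])
print(em_gmm_1d(x))   # means near -2 and 3, weights near 0.3 and 0.7
```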


References

  1. ^ Wu, Wei. "Unsupervised Learning" (PDF). Archived (PDF) from the original on 14 April 2024. Retrieved 26 April 2024.
  2. ^ Liu, Xiao; Zhang, Fanjin; Hou, Zhenyu; Mian, Li; Wang, Zhaoyu; Zhang, Jing; Tang, Jie (2021). "Self-supervised Learning: Generative or Contrastive". IEEE Transactions on Knowledge and Data Engineering: 1. arXiv:2006.08218. doi:10.1109/TKDE.2021.3090866. ISSN 1041-4347.
  3. ^ Radford, Alec; Narasimhan, Karthik; Salimans, Tim; Sutskever, Ilya (11 June 2018). "Improving Language Understanding by Generative Pre-Training" (PDF). OpenAI. p. 12. Archived (PDF) from the original on 26 January 2021. Retrieved 23 January 2021.
  4. ^ Li, Zhuohan; Wallace, Eric; Shen, Sheng; Lin, Kevin; Keutzer, Kurt; Klein, Dan; Gonzalez, Joey (2025-08-05). "Train Big, Then Compress: Rethinking Model Size for Efficient Training and Inference of Transformers". Proceedings of the 37th International Conference on Machine Learning. PMLR: 5958–5968.
  5. ^ Hinton, G. (2012). "A Practical Guide to Training Restricted Boltzmann Machines" (PDF). Neural Networks: Tricks of the Trade. Lecture Notes in Computer Science. Vol. 7700. Springer. pp. 599–619. doi:10.1007/978-3-642-35289-8_32. ISBN 978-3-642-35289-8. Archived (PDF) from the original on 2025-08-05. Retrieved 2025-08-05.
  6. ^ "Deep Belief Nets" (video). September 2009. Archived from the original on 2025-08-05. Retrieved 2025-08-05.
  7. ^ Peter, Dayan; Hinton, Geoffrey E.; Neal, Radford M.; Zemel, Richard S. (1995). "The Helmholtz machine". Neural Computation. 7 (5): 889–904. doi:10.1162/neco.1995.7.5.889. hdl:21.11116/0000-0002-D6D3-E. PMID 7584891. S2CID 1890561.
  8. ^ Buhmann, J.; Kuhnel, H. (1992). "Unsupervised and supervised data clustering with competitive neural networks". [Proceedings 1992] IJCNN International Joint Conference on Neural Networks. Vol. 4. IEEE. pp. 796–801. doi:10.1109/ijcnn.1992.227220. ISBN 0780305590. S2CID 62651220.
  9. ^ Comesaña-Campos, Alberto; Bouza-Rodríguez, José Benito (June 2016). "An application of Hebbian learning in the design process decision-making". Journal of Intelligent Manufacturing. 27 (3): 487–506. doi:10.1007/s10845-014-0881-z. ISSN 0956-5515. S2CID 207171436.
  10. ^ Carpenter, G.A. & Grossberg, S. (1988). "The ART of adaptive pattern recognition by a self-organizing neural network" (PDF). Computer. 21 (3): 77–88. doi:10.1109/2.33. S2CID 14625094. Archived from the original (PDF) on 2025-08-05. Retrieved 2025-08-05.
  11. ^ Roman, Victor (2025-08-05). "Unsupervised Machine Learning: Clustering Analysis". Medium. Archived from the original on 2025-08-05. Retrieved 2025-08-05.
  12. ^ Jordan, Michael I.; Bishop, Christopher M. (2004). "7. Intelligent Systems §Neural Networks". In Tucker, Allen B. (ed.). Computer Science Handbook (2nd ed.). Chapman & Hall/CRC Press. doi:10.1201/9780203494455. ISBN 1-58488-360-X. Archived from the original on 2025-08-05. Retrieved 2025-08-05.
  13. ^ Hastie, Tibshirani & Friedman 2009, pp. 485–586
  14. ^ Garbade, Dr Michael J. (2025-08-05). "Understanding K-means Clustering in Machine Learning". Medium. Archived from the original on 2025-08-05. Retrieved 2025-08-05.
  15. ^ Anandkumar, Animashree; Ge, Rong; Hsu, Daniel; Kakade, Sham; Telgarsky, Matus (2014). "Tensor Decompositions for Learning Latent Variable Models" (PDF). Journal of Machine Learning Research. 15: 2773–2832. arXiv:1210.7559. Bibcode:2012arXiv1210.7559A. Archived (PDF) from the original on 2025-08-05. Retrieved 2025-08-05.
