AN IMPROVED LEARNING ALGORITHM OF BAM
Nong Thi Hoa1,*, Bui The Duy2
1College of Information Technology and Communication – TNU
2Human Machine Interaction Laboratory – Vietnam National University, Hanoi
SUMMARY
Artificial neural networks, characterized by massive parallelism, robustness, and learning capacity,
have many applications in various fields. Bidirectional Associative Memory (BAM) is a neural
network that is extended from Hopfield networks to make a two-way associative search for a
pattern pair. The most important advantage of BAM is recalling stored patterns from noisy inputs.
The learning process of previous BAMs, however, is not flexible. Moreover, orthogonal patterns are
recalled better than other patterns, which means some important patterns cannot be recalled. In
this paper, we propose a learning algorithm of BAM which learns from training data more flexibly
and improves the recall of non-orthogonal patterns. In our learning algorithm, associations of
patterns are updated flexibly in a few iterations by modifying parameters after each iteration.
Moreover, the proposed learning algorithm assures that all patterns are recalled with similar
ability, a property expressed in the stop condition of the learning process. We have conducted
experiments with five datasets to prove the effectiveness of BAM with the proposed learning
algorithm (FBAM - Flexible BAM). Results from these experiments show that FBAM recalls better
than other BAMs in auto-association mode.
Keywords: Bidirectional Associative Memory, Associative Memory, Learning Algorithm, Noise
Tolerance, Pattern Recognition.
INTRODUCTION
Artificial neural networks, characterized by
massive parallelism, robustness, and learning
capability, effectively solve many problems
such as pattern recognition, controller design,
and data clustering. BAM [1] is built from
two Hopfield neural networks to perform a
two-way associative search for pattern pairs.
An important advantage of BAM is recalling
stored patterns from noisy or partial inputs.
Moreover, BAM possesses two attributes that
set it apart from other neural networks. First,
BAM is unconditionally stable. Second, BAM
converges to a stable state in synchronous
mode. Therefore, it is easy to apply BAM in
real applications.
Studies on models of BAM can be divided
into two categories: BAMs without iterative
learning and BAMs with iterative learning
(BAMs with multiple training strategy).
BAMs with iterative learning recall more
effectively than BAMs without iterative
learning. The iterative learning of BAMs falls
into two types. The first type uses the
minimum number of times for training pairs
of patterns (MNTP). The BAMs in [2, 3, 4]
employed a multiple training strategy that
assured orthogonal patterns were recalled
perfectly. However, the learning process is
not flexible because MNTP is fixed. The
second type learns pairs of patterns over
many iterations. These BAMs learned pattern
pairs sequentially over many iterations to
guarantee the perfect recall of orthogonal
patterns [5, 6, 7, 8]. Additionally, new
weights of associations depend directly on
old weights. Therefore, it takes a long time
to modify weights if the old weights are far
from the desired values. In other words,
previous BAMs recall non-orthogonal patterns
poorly and learn rigidly. In this paper, we
propose an iterative learning algorithm of
BAM, which learns more flexibly as well as
improves the recall of non-orthogonal
patterns. We use MNTP to implement the
multiple training strategy. In the proposed learning rule,
weights of associations are updated more
flexibly in a few iterations. Moreover,
weight updating is performed iteratively
until the conditions for recalling all
patterns correctly are satisfied.
The rest of the paper is organized as follows.
The next section gives an overview of BAM.
In Section 3, we present the proposed learning
algorithm and some discussion. Section 4
shows experimental results and compares
FBAM with other models.
BIDIRECTIONAL ASSOCIATIVE MEMORY
BAM is a two-layer feedback neural network
model that was introduced by Kosko [1]. As
shown in Figure 1, the input layer FA includes
n neurons a1, a2,..., an and the output layer FB
comprises m components b1, b2,..., bm, with
A = {0,1}^n and B = {0,1}^m. BAM can be
viewed as a bidirectional mapping between
vector spaces, W: R^n ↔ R^m.
Figure 1: Structure of Bidirectional Associative
Memory
Learning process
Assume that BAM learns p pairs of patterns
(A1, B1), ..., (Ap, Bp). The pattern pairs are
stored in the correlation matrix as follows:

W = \sum_{k=1}^{p} A_k^T B_k    (1)

where A_k and B_k are the bipolar modes of the
kth pattern pair.
A learning rule of BAM implementing the
multiple training strategy is [7]:

W = \sum_{i=1}^{p} q_i A_i^T B_i    (2)

where q_i is the minimum number of times for
training the ith pair of patterns.
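As a concrete illustration (a minimal sketch, not code from the paper), the weighted correlation matrix of Equations (1) and (2) can be built with NumPy, assuming patterns are stored as bipolar (+1/-1) row vectors:

```python
import numpy as np

def correlation_matrix(A, B, q):
    """Weighted correlation matrix of Equation (2).

    A: p x n array of bipolar (+1/-1) input patterns A_1..A_p.
    B: p x m array of bipolar output patterns B_1..B_p.
    q: length-p array of MNTP values; q_i = 1 for all i gives Equation (1).
    """
    W = np.zeros((A.shape[1], B.shape[1]))
    for A_i, B_i, q_i in zip(A, B, q):
        W += q_i * np.outer(A_i, B_i)  # q_i * A_i^T B_i
    return W
```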
Recalling process
The recalling process retrieves the nearest
stored pair (A_k, B_k) when any pair (α, β) is
presented as an initial condition to the
network. Starting with (α, β), the network
determines a finite sequence (α', β'),
(α'', β''), ... until an equilibrium point
(α_f, β_f) is reached, where

β' = φ(α W)    (3)
α' = φ(β' W^T)    (4)

and φ is the componentwise threshold function,
φ(F) = G, with

g_i = 1 if f_i > 0    (5)
g_i unchanged if f_i = 0    (6)
g_i = 0 if f_i < 0    (7)
Kosko proved that this process will converge
for any W. However, a pattern can be recalled
if and only if this pattern is a local minimum
of the energy surface [8].
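A minimal sketch of this recall procedure (an illustration under the binary-state convention of Section 2, not the authors' code) might look like:

```python
import numpy as np

def threshold(F, prev):
    """Componentwise threshold of Equations (5)-(7): 1 where f_i > 0,
    previous state where f_i = 0, 0 where f_i < 0."""
    G = prev.copy()
    G[F > 0] = 1
    G[F < 0] = 0
    return G

def recall(W, alpha, beta, max_iters=100):
    """Bidirectional recall of Equations (3)-(4), iterated until an
    equilibrium point (alpha_f, beta_f) is reached."""
    for _ in range(max_iters):
        beta_new = threshold(alpha @ W, beta)         # beta' = phi(alpha W)
        alpha_new = threshold(beta_new @ W.T, alpha)  # alpha' = phi(beta' W^T)
        if np.array_equal(alpha_new, alpha) and np.array_equal(beta_new, beta):
            break  # equilibrium reached
        alpha, beta = alpha_new, beta_new
    return alpha, beta
```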
Energy function
For any state (A_i, B_i), an energy function is
defined by

E(A_i, B_i) = -A_i W B_i^T    (8)
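In code, this energy reduces to a single quadratic form (a sketch, assuming bipolar 1-D NumPy vectors):

```python
def energy(W, A_i, B_i):
    """Energy of Equation (8): E(A_i, B_i) = -A_i W B_i^T (a scalar)."""
    return -(A_i @ W @ B_i)
```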
OUR APPROACH
As we discussed in Section 1, previous BAMs
learn rigidly and recall non-orthogonal
patterns poorly. Therefore, we propose a
learning algorithm with advantages over
previous BAMs. In the proposed learning
algorithm, patterns are learned flexibly until
it is assured that all patterns are recalled
correctly. Thus, non-orthogonal patterns are
recalled about as well as orthogonal patterns.
Y.F. Wang et al. used MNTP to implement the
multiple training strategy. They proposed an
explicit expression for MNTP to
guarantee the recall of patterns. This
expression ensures that the energy of each
training pattern pair is smaller than the
energy of all its neighboring patterns.
The proposed learning algorithm determines
MNTP in a few iterations. The learning
process is performed until the energy of all
pattern pairs is approximately equal to 0. For
artificial neural networks, if the energy of a
state equals 0, the network converges to a
global minimum. This means that each pattern
pair corresponds to a state whose energy is
very near a global minimum. Therefore, all
pattern pairs can be recalled correctly, and
all patterns are recalled with approximately
equal ability.
In Section 2, Equations (2) and (8) show that
MNTP affects the weights of associations and
the energy function. We first analyze the
relationship between the energy function and
MNTP, and then present our learning algorithm.
Relationship between the energy function
and MNTP
Suppose BAM stores p pattern pairs, where
pattern pair (A_i, B_i) is presented in bipolar
mode as A_i = (a_{i1}, ..., a_{in}) ∈ {-1,+1}^n
and B_i = (b_{i1}, ..., b_{im}) ∈ {-1,+1}^m.
The relationship between the energy function
and MNTP is established from Equations (2)
and (8) as follows.

From Equation (2), W can be split into the term
for the ith pair and the remaining terms:

W = q_i A_i^T B_i + \sum_{k=1, k \neq i}^{p} q_k A_k^T B_k    (9)

From Equations (8) and (9), E_i is formulated as:

E_i = -A_i W B_i^T = -q_i (A_i A_i^T)(B_i B_i^T) - \sum_{k \neq i} q_k (A_i A_k^T)(B_k B_i^T)    (10)
Since A_i A_i^T = n and B_i B_i^T = m for
bipolar vectors, the dominant term of E_i is
-q_i n m, and Equation (10) shows that the
absolute value of E_i decreases when each q_k
drops.
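Because E_i in Equation (10) is linear in the coefficients q_k, shrinking every q_k shrinks |E_i| proportionally. A small numeric check of this property (hypothetical random bipolar patterns, not data from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
p, n, m = 5, 16, 12
A = rng.choice([-1, 1], size=(p, n))  # bipolar input patterns
B = rng.choice([-1, 1], size=(p, m))  # bipolar output patterns

def energies(q):
    W = sum(q_k * np.outer(A_k, B_k) for A_k, B_k, q_k in zip(A, B, q))
    return np.array([-(A_i @ W @ B_i) for A_i, B_i in zip(A, B)])

q = np.ones(p)
print(energies(q))        # E_i with all q_k = 1
print(energies(0.5 * q))  # halving every q_k halves each |E_i|
```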
Improved learning algorithm
Our learning algorithm updates MNTP flexibly
after each iteration until the energy of every
pattern pair is approximately equal to 0.
Assuming BAM stores p pattern pairs, the
algorithm uses the following variables:
- q_i: the MNTP of the ith pattern pair, i = 1,...,p
- W: the matrix storing the weights of associations
- E_i: the energy of the ith pattern pair, i = 1,...,p
The proposed learning algorithm consists of
the following two steps:
Step 1: set up initial values of MNTP.
• Set q_i = 1 for i = 1,...,p to obtain the
original correlation matrix in Equation (1).
Step 2: update weights iteratively:
• Compute W by Equation (2).
• Then compute E_i by Equation (8) for i = 1,...,p.
• Based on the value of E_i, update q_i.
Repeat Step 2 until |E_i| ≅ 0 for all i = 1,...,p,
where |x| denotes the absolute value of x.
As analyzed above, the absolute value of E_i
decreases when each q_k drops. Therefore, we
propose the following rules for updating q_i:
R1: if |E_i| ≅ 0, do not change q_i.
R2: otherwise, decrease q_i so that |E_i|
approaches 0.
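The paper does not give a closed-form update for rule R2, so the self-contained sketch below simply scales each offending q_i down by a fixed factor per iteration; the factor, tolerance, and iteration cap are illustrative assumptions, not values from the paper:

```python
import numpy as np

def fbam_learn(A, B, tol=1e-3, factor=0.9, max_iters=1000):
    """Sketch of the proposed learning algorithm (FBAM).

    Step 1: q_i = 1 for all pairs (original correlation matrix, Eq. (1)).
    Step 2: recompute W (Eq. (2)) and E_i (Eq. (8)), then apply R1/R2
    until |E_i| is approximately 0 for every pair (the stop condition).
    """
    p = A.shape[0]
    q = np.ones(p)                      # Step 1
    for _ in range(max_iters):          # Step 2
        W = sum(q_i * np.outer(A_i, B_i) for A_i, B_i, q_i in zip(A, B, q))
        E = np.array([-(A_i @ W @ B_i) for A_i, B_i in zip(A, B)])
        if np.all(np.abs(E) < tol):     # R1 holds for every pair: stop
            break
        q[np.abs(E) >= tol] *= factor   # R2: decrease q_i to shrink |E_i|
    return W, q
```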
Discussion
Our learning algorithm has two advantages
over previous BAMs:
• The learning process is flexible because q_i
can be decreased after each iteration to
reduce E_i. Moreover, new weights depend on
the old connection weights only indirectly,
so FBAM does not take a long time to modify
old weights when they are far from the
desired values.
• Non-orthogonal patterns are recalled more
effectively because their recall ability is
similar to that of orthogonal patterns.
Additionally, the proposed learning algorithm
is easy to understand and implement.
EXPERIMENTS
We have conducted experiments in five
recognition applications in auto-association
mode: recognizing fingerprints, means of
transport, coins, traffic signal panels, and
handwritten characters. Figure 2 shows the
training images for the experiments. Training
and noisy images are downsized before being
converted to vectors.
Figure 2. Training images for five experiments
We select 10 images from the fingerprint
database of the Olympic Competition in
Information Technology (Fig. 2-a), 10 training
images from Google (Fig. 2-b), 20 training
images from a USA coin database (Fig. 2-c),
20 images from Google (Fig. 2-d), and 52
training images from the UJIpenchars database
(Fig. 2-e). For each experiment, 10 noisy
images are made from each training image by
deleting some pixels at random.
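A hypothetical sketch of this noise model (the deletion rate is an assumption; the paper does not state one):

```python
import numpy as np

def delete_random_pixels(image_vec, rate=0.1, rng=None):
    """Return a noisy copy of a binary image vector with a random
    fraction of its pixels deleted (set to 0)."""
    rng = rng or np.random.default_rng()
    noisy = image_vec.copy()
    idx = rng.choice(noisy.size, size=int(rate * noisy.size), replace=False)
    noisy[idx] = 0
    return noisy
```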
The recall ability of FBAM is compared to
that of other BAMs with the multiple training
strategy. Four BAMs are implemented, namely
the BAM of Tao Wang (TBAM) [4], the BAM of
Xinhua Zhuang (XBAM) [5], the BAM of
Y.F. Wang (WBAM) [6], and FBAM. The recall
ability of the BAMs is measured as the
percentage of pixels that are correctly
recalled. Table 1 shows the percentage of
successfully recalled pixels for each BAM.
The data in Table 1 show that FBAM is the
best model in all experiments.
In conclusion, we conducted five experiments
with different image sets. The experimental
results show that FBAM recalls better than
the other BAMs in auto-association mode.
Moreover, the recall ability of FBAM increases
significantly when the contents of the training
patterns are greatly different.
CONCLUSION
In this paper, we proposed an improved
learning algorithm for BAMs. Our learning
algorithm learns patterns more flexibly:
weights of associations are updated in a few
iterations based on changes of MNTP.
Moreover, FBAM recalls non-orthogonal
patterns effectively. We conducted experiments
in pattern recognition applications to
demonstrate the effectiveness of FBAM. The
experimental results show that FBAM recalls
better than other BAMs in auto-association
mode.
FBAM also recalls better when the contents of
the patterns are significantly different; we
will investigate how to develop this advantage
of FBAM in future work.
Acknowledgements. This work was supported
by Vietnam’s National Foundation for Science
and Technology Development (NAFOSTED)
under Grant Number 102.02-2011.13.
Table 1. Percentage of pixels successfully recalled (%)

Recognition Application      WBAM     TBAM     XBAM     FBAM
Fingerprint                  83.370   85.906   85.906   88.007
Handwritten characters       75.463   75.681   72.964   75.890
Traffic signal panels        77.980   28.303   78.303   78.348
Coin                         85.066   45.992   84.896   85.109
Means of transport           88.110   18.960   90.076   90.076
REFERENCES
[1] B. Kosko, "Bidirectional Associative Memory," IEEE Transactions on Systems, Man, and Cybernetics, vol. 18, no. 1, pp. 49–60, 1988.
[2] D. Shen and J. B. Cruz, "Encoding strategy for maximum noise tolerance Bidirectional Associative Memory," IEEE Transactions on Neural Networks, 2003.
[3] T. Wang and X. Zhuang, "Weighted Learning of Bidirectional Associative Memories by Global Minimization," IEEE Transactions on Neural Networks, vol. 3, no. 6, pp. 1010–1018, 1992.
[4] T. Wang, X. Zhuang, and X. Xing, "Memories with Optimal Stability," IEEE Transactions on Systems, Man, and Cybernetics, vol. 24, no. 5, pp. 778–790, 1994.
[5] X. Zhuang, Y. Huang, and S.-S. Chen, "Better learning for bidirectional associative memory," Neural Networks, vol. 6, no. 8, pp. 1131–1146, 1993.
[6] Y. F. Wang, J. R. Cruz, and J. R. Mulligan, "On multiple training for bidirectional associative memory," IEEE Transactions on Neural Networks, vol. 1, no. 3, pp. 275–276, 1990.
[7] Y. F. Wang, J. R. Cruz, and J. R. Mulligan, "Guaranteed recall of all training pairs for BAM," IEEE Transactions on Neural Networks, vol. 2, no. 6, pp. 559–566, 1991.
[8] Y. F. Wang, J. R. Cruz, and J. R. Mulligan, "Two coding strategies for bidirectional associative memory," IEEE Transactions on Neural Networks, vol. 1, no. 1, pp. 81–92, 1990.
Received: 15/9/2013; Reviewed: 24/10/2013; Accepted: 18/11/2013
Scientific reviewer: Assoc. Prof. Dr. Nguyen Viet Ha – Vietnam National University, Hanoi