The Eigenvalues and Eigenvectors of Tridiagonal Toeplitz Matrix

by Changyu Zhou  


Abstract

 Usually, the eigenvalues of a matrix are calculated first, and then the eigenvectors. But for a special kind of matrix, the tridiagonal Toeplitz matrix, it is more convenient to find the eigenvectors first. Want to know how? Keep reading!

Details

 We investigate a special kind of $n\times n$ real matrix, the tridiagonal Toeplitz matrix:
$$
M=\begin{pmatrix} a & b & & \\ c & \ddots & \ddots & \\ & \ddots & a & b \\ & & c & a \end{pmatrix}_{n\times n}\qquad (b,c\neq 0)
$$

 Before dealing with this kind of matrix, let's look at a simpler one, which is symmetric, as follows:
$$
M_1=\begin{pmatrix} a & b & & \\ b & \ddots & \ddots & \\ & \ddots & a & b \\ & & b & a \end{pmatrix}=aI+b\,T_1\qquad (b\neq 0)
$$

with
$$
T_1=\begin{pmatrix} 0 & 1 & & \\ 1 & \ddots & \ddots & \\ & \ddots & 0 & 1 \\ & & 1 & 0 \end{pmatrix}
$$

 Note that if $\lambda$ and $v$ satisfy
$$
M_1v=\lambda v,\qquad v\neq 0,
$$

it can be inferred that
$$
M_1v=(aI+b\,T_1)v=av+b\,T_1v=\lambda v,
$$

after simplifying,
$$
T_1v=\frac{\lambda-a}{b}\,v,
$$

so $\lambda'=\dfrac{\lambda-a}{b}$ is an eigenvalue of $T_1$.

 Similarly, if $\lambda'$ is an eigenvalue of $T_1$, then $\lambda' b+a$ is an eigenvalue of $M_1$.

 In other words, $M_1$ and $T_1$ have the same eigenvectors, and their eigenvalues are related as above.

 So, to find the eigenvalues and eigenvectors of $M_1$, it suffices to find those of the simpler matrix $T_1$.
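 As a quick numerical sanity check (not part of the original derivation, and assuming NumPy is available), one can verify that $M_1=aI+bT_1$ is diagonalized by the eigenvectors of $T_1$, with eigenvalues shifted and scaled accordingly:

```python
import numpy as np

# Sanity check: M1 = a*I + b*T1 shares eigenvectors with T1, and its
# eigenvalues are a + b * (eigenvalues of T1).
n, a, b = 6, 2.0, 3.0
T1 = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
M1 = a * np.eye(n) + b * T1

evals_T1, evecs_T1 = np.linalg.eigh(T1)   # T1 is symmetric
evals_M1, _ = np.linalg.eigh(M1)

# Eigenvalues of M1 are a + b * eigenvalues of T1 (b > 0 keeps the ordering) ...
assert np.allclose(evals_M1, a + b * evals_T1)
# ... and the eigenvectors of T1 also diagonalize M1.
assert np.allclose(evecs_T1.T @ M1 @ evecs_T1, np.diag(evals_M1))
```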

 Usually, the eigenvalues are calculated first and then the eigenvectors. But for $T_1$ it is simpler to find the eigenvectors first. Assume that $\lambda$ is an eigenvalue of $T_1$ and $v$ is a corresponding eigenvector. With hindsight, it is convenient to write $\lambda=2c$ (this $c$ is just a new parameter, not the subdiagonal entry of $M$); then
$$
(T_1-\lambda I)v=
\begin{pmatrix} -2c & 1 & & \\ 1 & \ddots & \ddots & \\ & \ddots & -2c & 1 \\ & & 1 & -2c \end{pmatrix}
\begin{pmatrix} v_1 \\ v_2 \\ \vdots \\ v_{n-1} \\ v_n \end{pmatrix}=
\begin{pmatrix} -2cv_1+v_2 \\ v_1-2cv_2+v_3 \\ \vdots \\ v_{n-2}-2cv_{n-1}+v_n \\ v_{n-1}-2cv_n \end{pmatrix}=0
$$

 By introducing $v_0=0$ and $v_{n+1}=0$, all the equations take the same form:
$$
v_{k-1}-2cv_k+v_{k+1}=0,\qquad k=1,2,\cdots,n,
$$

or equivalently
$$
v_{k+1}=2cv_k-v_{k-1}.
$$
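 As a small illustration (anticipating the sine solution obtained below, and assuming NumPy), iterating this recurrence with $c=\cos\theta$, $v_0=0$ and $v_1=\sin\theta$ reproduces $v_k=\sin(k\theta)$:

```python
import numpy as np

# Illustration only: with c = cos(theta), v0 = 0, v1 = sin(theta), the
# recurrence v_{k+1} = 2*c*v_k - v_{k-1} reproduces v_k = sin(k*theta),
# anticipating the closed form derived below.
theta, n = 0.7, 10
c = np.cos(theta)
v = [0.0, np.sin(theta)]
for k in range(1, n + 1):
    v.append(2 * c * v[k] - v[k - 1])

assert np.allclose(v, np.sin(np.arange(n + 2) * theta))
```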

 Subtracting $rv_k$ from both sides gives
$$
v_{k+1}-rv_k=(2c-r)v_k-v_{k-1}.
$$

Let $a_k=v_{k+1}-rv_k$. For $\{a_k\}$ to form a geometric progression, $a_k=(2c-r)v_k-v_{k-1}$ must be a constant multiple of $a_{k-1}=v_k-rv_{k-1}$, so the coefficients must satisfy
$$
1:(2c-r)=-r:(-1),
$$
that is,
$$
r^2-2cr+1=0.
$$

 Denote the two roots by $r_1,r_2$; then
$$
r_1+r_2=2c,\qquad r_1r_2=1.
$$

It follows that if $r_1\neq r_2$, or in other words $c^2-1\neq 0$, then
$$
v_k=c_1r_1^k+c_2r_2^k,
$$

 If $r_1=r_2=r$,
$$
v_k=(c_1+c_2k)r^k,
$$

where $c_1$ and $c_2$ are constants.

 In the first case,
$$
v_k=c_1r_1^k+c_2r_2^k,\qquad v_0=c_1+c_2=0,
$$
then
$$
v_k=c_1(r_1^k-r_2^k),\qquad v_{n+1}=c_1(r_1^{n+1}-r_2^{n+1})=0.
$$

Since $v\neq 0$, we have $c_1\neq 0$, so
$$
r_1^{n+1}=r_2^{n+1}=r_1^{-n-1}\ \Rightarrow\ r_1^{2(n+1)}=1,
$$

so $|r_1|=1$. Writing $r_1=e^{i\theta}$, we have
$$
v_k=c_1(e^{ik\theta}-e^{-ik\theta})=2c_1i\sin(k\theta),\qquad e^{i2(n+1)\theta}=1=e^{i2j\pi}.
$$

Since $v\neq 0$, the values $\sin(k\theta)$ cannot all vanish, so $\theta$ is not an integer multiple of $\pi$; in particular $j\neq 0,\,n+1$, and it suffices to take
$$
\theta_j=\frac{j\pi}{n+1},\qquad j=1,2,\cdots,n.
$$

 Let $c_1=1/(2i)$; then
$$
v_k=\sin(k\theta_j),
$$
$$
V_j=\begin{pmatrix}\sin(\theta_j)\\ \sin(2\theta_j)\\ \vdots\\ \sin\big((n-1)\theta_j\big)\\ \sin(n\theta_j)\end{pmatrix}.
$$

The corresponding eigenvalue is
$$
\lambda_j=2c=r_1+r_2=e^{i\theta_j}+e^{-i\theta_j}=2\cos(\theta_j),
$$

with
$$
\theta_j=\frac{j\pi}{n+1},\qquad j=1,2,\cdots,n.
$$
 As for the second case, $r_1=r_2=r=c$:
$$
\begin{aligned}
&\quad\ v_k=(c_1+c_2k)r^k\\
&\Rightarrow v_0=c_1=0\\
&\Rightarrow v_k=c_2kr^k\\
&\Rightarrow v_{n+1}=c_2(n+1)r^{n+1}=0,
\end{aligned}
$$

then
$$
c_2=0\ \Rightarrow\ v_k=0\ \Rightarrow\ v=0,
$$

which contradicts $v\neq 0$.

 So the eigenvalues of $T_1$ are
$$
\lambda_j=2\cos\Big(\frac{j\pi}{n+1}\Big),\qquad j=1,2,\cdots,n,
$$

with corresponding eigenvectors
$$
V_j=\begin{pmatrix}\sin\Big(\dfrac{1}{n+1}j\pi\Big)\\[2mm] \sin\Big(\dfrac{2}{n+1}j\pi\Big)\\ \vdots\\ \sin\Big(\dfrac{n-1}{n+1}j\pi\Big)\\[2mm] \sin\Big(\dfrac{n}{n+1}j\pi\Big)\end{pmatrix}.
$$

It can easily be shown that $V_i^TV_j=0$ for $i\neq j$ (hint: use Euler's formula).
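 These closed forms are easy to check numerically (a sketch, assuming NumPy; not part of the original argument):

```python
import numpy as np

# Check the closed-form eigenpairs of T1 against numpy.linalg.eigh and verify
# that the sine eigenvectors are mutually orthogonal.
n = 7
T1 = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)

j = np.arange(1, n + 1)
theta = j * np.pi / (n + 1)
lam = 2 * np.cos(theta)                           # closed-form eigenvalues
V = np.sin(np.outer(np.arange(1, n + 1), theta))  # column j-1 is V_j

# Each column of V is an eigenvector of T1 with eigenvalue lam[j-1].
assert np.allclose(T1 @ V, V * lam)
# The columns are mutually orthogonal: the Gram matrix is diagonal.
G = V.T @ V
assert np.allclose(G - np.diag(np.diag(G)), 0)
# Same spectrum as numpy (eigh returns eigenvalues in ascending order).
assert np.allclose(np.sort(lam), np.linalg.eigh(T1)[0])
```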

 Let $V=(V_1,V_2,\cdots,V_n)$ and $D=\mathrm{diag}(\lambda_1,\lambda_2,\cdots,\lambda_n)$; we have
$$
T_1V=VD\quad\text{or}\quad V^{-1}T_1V=D.
$$
 Therefore, the eigenvalues of $M_1$ are
$$
\lambda_j=a+2b\cos\Big(\frac{j\pi}{n+1}\Big),\qquad j=1,2,\cdots,n,
$$

and the eigenvectors are the same as those of $T_1$.

 At this point, we are only halfway through the story.

 Construct $M_2$ as follows:
$$
M_2=\begin{pmatrix} a & b & & \\ -b & \ddots & \ddots & \\ & \ddots & a & b \\ & & -b & a \end{pmatrix}_{n\times n}=a\,I+b\,T_2\qquad (b\neq 0),
$$
$$
T_2=\begin{pmatrix} 0 & 1 & & \\ -1 & \ddots & \ddots & \\ & \ddots & 0 & 1 \\ & & -1 & 0 \end{pmatrix}.
$$

 The eigenvalues of $T_2$ can be found in the same way:
$$
\lambda_j=2i\cos\Big(\frac{j\pi}{n+1}\Big),\qquad j=1,2,\cdots,n,
$$

so the eigenvalues of $M_2$ are
$$
\lambda_j=a+2ib\cos\Big(\frac{j\pi}{n+1}\Big),\qquad j=1,2,\cdots,n.
$$
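 As before, a brief numerical check of this formula (a sketch, assuming NumPy):

```python
import numpy as np

# Check: for M2 = a*I + b*T2 (T2 has +1 on the superdiagonal and -1 on the
# subdiagonal), the eigenvalues should be a + 2i*b*cos(j*pi/(n+1)).
n, a, b = 6, 1.5, 2.0
T2 = np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
M2 = a * np.eye(n) + b * T2

j = np.arange(1, n + 1)
pred_imag = 2 * b * np.cos(j * np.pi / (n + 1))   # imaginary parts of lambda_j - a
computed = np.linalg.eigvals(M2)

assert np.allclose(computed.real, a)              # real part of every eigenvalue is a
assert np.allclose(np.sort(computed.imag), np.sort(pred_imag))
```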

 Finally, consider
$$
\begin{aligned}
M&=\begin{pmatrix} a & b & & \\ c & \ddots & \ddots & \\ & \ddots & a & b \\ & & c & a \end{pmatrix}\qquad (b,c\neq 0)\\
&=c\begin{pmatrix} 0 & & & \\ 1 & \ddots & & \\ & \ddots & 0 & \\ & & 1 & 0 \end{pmatrix}
+b\begin{pmatrix} 0 & 1 & & \\ & \ddots & \ddots & \\ & & 0 & 1 \\ & & & 0 \end{pmatrix}+aI.
\end{aligned}
$$

 Let
$$
S=\mathrm{diag}\Big(1,\Big(\frac{|b|}{|c|}\Big)^{1/2},\cdots,\Big(\frac{|b|}{|c|}\Big)^{(n-1)/2}\Big);
$$
then
$$
S^{-1}=\mathrm{diag}\Big(1,\Big(\frac{|c|}{|b|}\Big)^{1/2},\cdots,\Big(\frac{|c|}{|b|}\Big)^{(n-1)/2}\Big),
$$

therefore
$$
\begin{aligned}
SMS^{-1}&=cS\begin{pmatrix} 0 & & & \\ 1 & \ddots & & \\ & \ddots & 0 & \\ & & 1 & 0 \end{pmatrix}S^{-1}
+bS\begin{pmatrix} 0 & 1 & & \\ & \ddots & \ddots & \\ & & 0 & 1 \\ & & & 0 \end{pmatrix}S^{-1}+aSIS^{-1}\\[2mm]
&=c\begin{pmatrix} 0 & & & \\ \big(\frac{|b|}{|c|}\big)^{1/2} & \ddots & & \\ & \ddots & 0 & \\ & & \big(\frac{|b|}{|c|}\big)^{1/2} & 0 \end{pmatrix}
+b\begin{pmatrix} 0 & \big(\frac{|c|}{|b|}\big)^{1/2} & & \\ & \ddots & \ddots & \\ & & 0 & \big(\frac{|c|}{|b|}\big)^{1/2} \\ & & & 0 \end{pmatrix}+aI\\[2mm]
&=\operatorname{sgn}(c)\sqrt{|bc|}\begin{pmatrix} 0 & & & \\ 1 & \ddots & & \\ & \ddots & 0 & \\ & & 1 & 0 \end{pmatrix}
+\operatorname{sgn}(b)\sqrt{|bc|}\begin{pmatrix} 0 & 1 & & \\ & \ddots & \ddots & \\ & & 0 & 1 \\ & & & 0 \end{pmatrix}+aI.
\end{aligned}
$$
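 This rescaling is easy to confirm numerically (a sketch, assuming NumPy; the variable names are mine):

```python
import numpy as np

# With S = diag(1, (|b|/|c|)^(1/2), ..., (|b|/|c|)^((n-1)/2)), the matrix
# S M S^{-1} has constant off-diagonals: sgn(b)*sqrt(|bc|) above the diagonal
# and sgn(c)*sqrt(|bc|) below it.
n, a, b, c = 5, 1.0, 4.0, -9.0
M = a * np.eye(n) + b * np.diag(np.ones(n - 1), 1) + c * np.diag(np.ones(n - 1), -1)

S = np.diag((abs(b) / abs(c)) ** (np.arange(n) / 2))
B = S @ M @ np.linalg.inv(S)

s = np.sqrt(abs(b * c))
assert np.allclose(np.diag(B, 1), np.sign(b) * s)    # superdiagonal
assert np.allclose(np.diag(B, -1), np.sign(c) * s)   # subdiagonal
assert np.allclose(np.diag(B), a)                    # main diagonal unchanged
```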

 If $\operatorname{sgn}(b)=\operatorname{sgn}(c)$, then
$$
M_3=SMS^{-1}=aI+\operatorname{sgn}(b)\sqrt{bc}\begin{pmatrix} 0 & 1 & & \\ 1 & \ddots & \ddots & \\ & \ddots & 0 & 1 \\ & & 1 & 0 \end{pmatrix}=aI+\operatorname{sgn}(b)\sqrt{bc}\,T_1,
$$

so $M\sim M_3$, and the eigenvalues of $M$ are the same as those of $M_3$.

 By the earlier results, the eigenvalues of $M_3$ are
$$
\lambda_j=a+2\operatorname{sgn}(b)\sqrt{bc}\cos\Big(\frac{j\pi}{n+1}\Big),\qquad j=1,2,\cdots,n.
$$
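 A quick numerical check of this same-sign case (a sketch, assuming NumPy):

```python
import numpy as np

# For b*c > 0, the spectrum of M should be a + 2*sgn(b)*sqrt(b*c)*cos(j*pi/(n+1)).
n, a, b, c = 8, 0.5, -2.0, -3.0          # sgn(b) = sgn(c)
M = a * np.eye(n) + b * np.diag(np.ones(n - 1), 1) + c * np.diag(np.ones(n - 1), -1)

j = np.arange(1, n + 1)
predicted = a + 2 * np.sign(b) * np.sqrt(b * c) * np.cos(j * np.pi / (n + 1))
computed = np.linalg.eigvals(M).real     # eigenvalues are real in this case

assert np.allclose(np.sort(predicted), np.sort(computed))
```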

 If $\operatorname{sgn}(b)=-\operatorname{sgn}(c)$, then
$$
M_4=SMS^{-1}=aI+\operatorname{sgn}(b)\sqrt{-bc}\begin{pmatrix} 0 & 1 & & \\ -1 & \ddots & \ddots & \\ & \ddots & 0 & 1 \\ & & -1 & 0 \end{pmatrix}=aI+\operatorname{sgn}(b)\sqrt{-bc}\,T_2,
$$

so $M\sim M_4$, and the eigenvalues of $M$ are the same as those of $M_4$:

$$
\lambda_j=a+2i\operatorname{sgn}(b)\sqrt{-bc}\cos\Big(\frac{j\pi}{n+1}\Big),\qquad j=1,2,\cdots,n.
$$
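 And a corresponding check of the opposite-sign case (a sketch, assuming NumPy):

```python
import numpy as np

# For b*c < 0, the spectrum of M should be a + 2i*sgn(b)*sqrt(-b*c)*cos(j*pi/(n+1)).
n, a, b, c = 8, 0.5, 2.0, -3.0           # sgn(b) = -sgn(c)
M = a * np.eye(n) + b * np.diag(np.ones(n - 1), 1) + c * np.diag(np.ones(n - 1), -1)

j = np.arange(1, n + 1)
pred_imag = 2 * np.sign(b) * np.sqrt(-b * c) * np.cos(j * np.pi / (n + 1))
computed = np.linalg.eigvals(M)

assert np.allclose(computed.real, a)
assert np.allclose(np.sort(computed.imag), np.sort(pred_imag))
```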

 The corresponding eigenvectors can be found by solving
$$
(M-\lambda_jI)v=0,
$$
or, using the similarity above, as $S^{-1}$ times the corresponding eigenvector of $SMS^{-1}$.
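 For instance, in the same-sign case the eigenvectors of $SMS^{-1}=M_3$ are the sine vectors $V_j$, so $S^{-1}V_j$ are eigenvectors of $M$ (a sketch, assuming NumPy; not spelled out in the original text):

```python
import numpy as np

# Same-sign case (b*c > 0): since M = S^{-1} M3 S and M3 has the sine vectors
# V_j as eigenvectors, the vectors S^{-1} V_j are eigenvectors of M with
# eigenvalues a + 2*sgn(b)*sqrt(b*c)*cos(j*pi/(n+1)).
n, a, b, c = 6, 1.0, 2.0, 8.0
M = a * np.eye(n) + b * np.diag(np.ones(n - 1), 1) + c * np.diag(np.ones(n - 1), -1)
S_inv = np.diag((abs(c) / abs(b)) ** (np.arange(n) / 2))

j = np.arange(1, n + 1)
theta = j * np.pi / (n + 1)
lam = a + 2 * np.sign(b) * np.sqrt(b * c) * np.cos(theta)
V = np.sin(np.outer(np.arange(1, n + 1), theta))   # columns are V_j
W = S_inv @ V                                      # columns are S^{-1} V_j

assert np.allclose(M @ W, W * lam)                 # M (S^{-1}V_j) = lambda_j (S^{-1}V_j)
```
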
Acknowledgements

 Thanks to Mr. Yang for providing me with some useful materials on tridiagonal matrices [1].

References

 [1] https://www.math.upenn.edu/~kazdan/AMCS602/tridiag-short.pdf
 [2] https://en.wikipedia.org/wiki/Tridiagonal_matrix
 [3] https://en.wikipedia.org/wiki/Toeplitz_matrix
