

Theoretical assignment - ELBO, GANs
A single Transformer layer consists of the following components: Multi-Head Self-Attention (MHSA), Layer Normalization, and a Feedforward Network (MLP). The dimensions and details of these components are outlined below:
Input: $X \in \mathbb{R}^{H \times W \times d}$.
MHSA weight matrices: for $N_h$ heads, $W_Q^h, W_K^h, W_V^h \in \mathbb{R}^{d \times d_h}$, where $N_h \cdot d_h = d$.
MLP architecture: the input is $\hat{Y}$, the output of MHSA with a residual connection.
First linear transformation:
$$F = \mathrm{ReLU}(\hat{Y} W_1 + b_1), \qquad W_1 \in \mathbb{R}^{d \times d_{ff}}, \; b_1 \in \mathbb{R}^{d_{ff}}.$$
Second linear transformation:
$$Y_{\mathrm{MLP}} = F W_2 + b_2, \qquad W_2 \in \mathbb{R}^{d_{ff} \times d}, \; b_2 \in \mathbb{R}^{d}.$$
Here, $d_{ff} = 8d$.
Assume that computational complexity is always measured in terms of scalar multiplications.
a) Derive the computational complexity of MHSA and of the MLP in the Transformer layer. Exclude the softmax operation in MHSA from the derivation. (2 marks)
b) Find the conditions on $d$, $H$, $W$ under which the MLP computations exceed those of MHSA. Is this possible in practical scenarios for image applications? (2 marks)
c) Compare the number of parameters in the MHSA and MLP layers, and determine which layer contributes more to the total parameter count of a Transformer layer. (1 mark)
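As a sanity check on parts (a) and (b), here is a minimal counting sketch in Python. It treats the $H \times W$ grid as $N = H \cdot W$ tokens and assumes MHSA consists only of the Q/K/V projections plus the score and context matmuls (no output projection, softmax excluded); the closed-form counts in the comments are one possible working, not part of the assignment.

```python
# Hedged sanity check for parts (a)/(b): count scalar multiplications in MHSA
# and the MLP under the stated setup (no output projection, softmax excluded).

def mhsa_mults(H, W, d):
    """Scalar multiplications in MHSA for N = H*W tokens of dimension d.

    Per head h: Q_h, K_h, V_h = X W_Q^h, X W_K^h, X W_V^h  -> 3 * N * d * d_h
    scores     S_h = Q_h K_h^T                             -> N * N * d_h
    context    S_h V_h                                     -> N * N * d_h
    Summed over N_h heads with N_h * d_h = d.
    """
    N = H * W
    return 3 * N * d * d + 2 * N * N * d


def mlp_mults(H, W, d, dff=None):
    """Scalar multiplications in the two-layer MLP, with d_ff = 8d by default."""
    N = H * W
    dff = 8 * d if dff is None else dff
    return N * d * dff + N * dff * d  # first and second linear layers


if __name__ == "__main__":
    # Illustrative values (assumed): a 14x14 token grid with d = 768.
    H, W, d = 14, 14, 768
    print("MHSA mults:", mhsa_mults(H, W, d))
    print("MLP  mults:", mlp_mults(H, W, d))
    # With these counts, MLP > MHSA iff 16*N*d^2 > 3*N*d^2 + 2*N^2*d,
    # i.e. H*W < 6.5*d -- easy to probe numerically here.
    print("MLP exceeds MHSA:", mlp_mults(H, W, d) > mhsa_mults(H, W, d))
```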
$$(\hat{S}_h)_{ij} = \begin{cases} (S_h)_{ij}, & \text{if } j \in \mathrm{Patch}(i), \\ 0, & \text{otherwise.} \end{cases}$$
For this question only, you may exclude the complexity of computing the queries, keys, and values.
a) Derive the computational complexity of MHSA with the above approximation. (2 marks)
b) Calculate the reduction in computations compared to the original MHSA. (2 marks)
c) Is there a reduction in the number of parameters for the layer? If yes, calculate the reduction. (1 mark)
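A similar hedged sketch for this question, under the assumption (mine, not stated in the excerpt) that every $\mathrm{Patch}(i)$ contains the same number $P$ of tokens, so each query needs only $P$ score and context multiplications per head:

```python
# Hedged sketch: multiplication counts when attention scores are only computed
# for j in Patch(i). |Patch(i)| = P is assumed constant; P is illustrative.

def full_attention_mults(N, d):
    """Score (Q K^T) and context (S V) multiplications, summed over heads with
    N_h * d_h = d; Q/K/V projections excluded, as allowed in the question."""
    return 2 * N * N * d


def patch_attention_mults(N, d, P):
    """Same two steps, but each query i attends only to the P keys in Patch(i)."""
    return 2 * N * P * d


if __name__ == "__main__":
    N, d, P = 14 * 14, 768, 49  # e.g. 7x7 local windows -- assumed values
    full = full_attention_mults(N, d)
    local = patch_attention_mults(N, d, P)
    print("full:", full, "patched:", local, "reduction factor:", full / local)  # = N / P
```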
$$W_1 \approx A_1 B_1, \qquad W_2 \approx A_2 B_2,$$
where $A_1 \in \mathbb{R}^{d \times r}$, $B_1 \in \mathbb{R}^{r \times d_{ff}}$, $A_2 \in \mathbb{R}^{d_{ff} \times r}$, $B_2 \in \mathbb{R}^{r \times d}$, and $r \ll d$.
a) Derive the computational complexity of the approximated MLP. (2 marks)
b) Compare the computational savings with the original MLP. (2 marks)
c) Is there a reduction in the number of parameters for the layer? If yes, calculate the reduction. (1 mark)
Refer to LoRA (Low-Rank Adaptation) for practical applications of low-rank approximations in LLMs.
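For this question, a small bookkeeping sketch comparing per-token multiplications and parameter counts of the original and factorised MLP; the rank $r = 64$ is an illustrative choice, and the closed forms in the code are one possible working rather than the official solution.

```python
# Hedged sketch: multiplications (per token) and parameters in the original MLP
# vs. the low-rank factorised one, W1 ~= A1 B1, W2 ~= A2 B2. Biases are kept
# unfactorised; counts below are my own bookkeeping, not from the assignment.

def mlp_stats(d, dff):
    mults = d * dff + dff * d                 # per-token mults, two linear layers
    params = (d * dff + dff) + (dff * d + d)  # weights + biases
    return mults, params


def lowrank_mlp_stats(d, dff, r):
    # x -> (x A1) B1 costs d*r + r*dff per token; similarly for the second layer.
    mults = (d * r + r * dff) + (dff * r + r * d)
    params = (d * r + r * dff + dff) + (dff * r + r * d + d)
    return mults, params


if __name__ == "__main__":
    d = 768
    dff = 8 * d
    r = 64  # illustrative rank with r << d
    m0, p0 = mlp_stats(d, dff)
    m1, p1 = lowrank_mlp_stats(d, dff, r)
    print(f"mults:  {m0} -> {m1}  (x{m0 / m1:.1f} fewer)")
    print(f"params: {p0} -> {p1}  (x{p0 / p1:.1f} fewer)")
```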
Given two density functions p(x) and q(x), we define the following divergence measures:
Total Variation Distance:
$$D_{TV}(p\|q) = \frac{1}{2} \int |p(x) - q(x)| \, dx$$
Kullback-Leibler (KL) Divergence:
$$D_{KL}(p\|q) = \int p(x) \log \frac{p(x)}{q(x)} \, dx$$
Hellinger Distance:
$$D_{H}(p\|q) = \sqrt{\frac{1}{2} \int \left( \sqrt{p(x)} - \sqrt{q(x)} \right)^2 dx}$$
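A minimal numerical sketch of these three measures for discrete distributions, assuming the $\tfrac{1}{2}$-normalised conventions written above (with sums in place of integrals):

```python
# Hedged numeric sketch of the three divergences for discrete distributions.
import numpy as np

def d_tv(p, q):
    """Total variation distance: 0.5 * sum |p - q|."""
    return 0.5 * np.abs(p - q).sum()

def d_kl(p, q):
    """KL divergence sum p log(p/q); assumes q > 0 wherever p > 0."""
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask]))

def d_h(p, q):
    """Hellinger distance: sqrt(0.5 * sum (sqrt p - sqrt q)^2)."""
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

if __name__ == "__main__":
    # Toy example with two 3-point distributions (assumed values).
    p = np.array([0.5, 0.3, 0.2])
    q = np.array([0.4, 0.4, 0.2])
    print("TV:", d_tv(p, q), "KL:", d_kl(p, q), "Hellinger:", d_h(p, q))
```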
Derive and analyze the relationships between these measures:
$$D_{TV}(p\|q) \le \sqrt{2}\, D_{H}(p\|q).$$
(a) Show that $-\log(x + 1) \ge -x$ for all $x \ge 0$.
(b) Using the result in (a), prove that $\sqrt{2}\, D_{H}(p\|q) \le \sqrt{D_{KL}(p\|q)}$, and hence that $D_{TV}(p\|q) \le \sqrt{D_{KL}(p\|q)}$.
Food for Thought: Why are these inequalities significant? Can you think of a practical application for such inequalities?
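As a quick empirical check (not a proof), the chain $D_{TV} \le \sqrt{2}\, D_H \le \sqrt{D_{KL}}$ can be probed on random discrete distributions; the sampling scheme below is an arbitrary choice.

```python
# Hedged numerical spot-check of D_TV <= sqrt(2) D_H <= sqrt(D_KL) on random
# discrete distributions, using the 1/2-normalised definitions above.
import numpy as np

rng = np.random.default_rng(0)
for _ in range(10_000):
    k = rng.integers(2, 8)                 # random support size
    p = rng.dirichlet(np.ones(k))          # random distributions on k points
    q = rng.dirichlet(np.ones(k))
    tv = 0.5 * np.abs(p - q).sum()
    kl = np.sum(p * np.log(p / q))
    h = np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))
    assert tv <= np.sqrt(2) * h + 1e-12
    assert np.sqrt(2) * h <= np.sqrt(kl) + 1e-12
print("inequalities held on all sampled pairs")
```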