
TOPIC: Singular value decomposition - scaling

Singular value decomposition - scaling 2 years 9 months ago #39

  • mroth (New Member, Posts: 3)
Hi community,

I'm incorporating a model of a scattering medium into a simulation and need to check whether my model is correct. I assume the conditions for random matrix theory to be a valid description are met. I apply an SVD to compare against the results of Popoff et al. (doi.org/10.1088/1367-2630/13/12/123021). For the SVD analysis I relied on the paper by Edelman & Wang (web.eecs.umich.edu/~rajnrao/Acta05rmt.pdf). My problem lies in the normalization of the singular values.

I assume a normal distribution N(0,1) for both the real and imaginary parts of the transmission matrix elements. The matrix is normalized to its largest absolute value, as I assume a passive transmission medium with no amplification possible.
For this case of gamma = 1, the quarter circle law is expected for the singular value distribution. My probability distribution follows that curve, but I obtain an incorrect range for the singular values (the quarter circle law predicts [0,2]).
After this scaling of K, the variance of its elements is no longer 1. I found the previously applied scaling factor, divided by sqrt(2), to be a working conversion factor to the [0,2] interval predicted by the quarter circle law.

However, the physical interpretation is not clear to me. By visualizing the singular values I want to verify that I use a description previously shown to be valid for certain scattering media.

Would someone know how to correctly scale the singular values in order to show that the example follows the quarter circle law?
I'd be grateful for any hint.
Cheers, Matthias

Here's the numeric Python code I used:
from numpy import sqrt, amax, sum
from numpy.random import randn
from scipy.linalg import svd
import matplotlib.pyplot as plt
m = 1024
gamma = 1
n = gamma * m

K = randn(m,n)+ 1j*randn(m,n) # transmission matrix
normFactor = amax(abs(K)) # largest-magnitude element of K
K = K/normFactor # normalization
U,s,V = svd(K)

# different normalizations
sNorm1 = s/sqrt(m) # as in Edelman & Wang
sNorm2 = s/sqrt(sum(s**2)) # as in Popoff et al.
sNorm3 = normFactor/sqrt(2)*s/(sqrt(m)) # results in [0,2] interval (as expected)

#%% plotting
plt.figure()
plt.hist(sNorm1, bins = 50, density = True)
plt.title('histogram of singular values, normalization 1/sqrt(m)')
plt.xlabel('normalized singular value')
plt.ylabel('probability density')

plt.figure()
plt.hist(sNorm2, bins = 50, density = True)
plt.title('histogram of singular values, normalization 1/sqrt(sum(s**2))')
plt.xlabel('normalized singular value')
plt.ylabel('probability density')

plt.figure()
plt.hist(sNorm3, bins = 50, density = True)
plt.title('histogram of singular values, normalization normFactor/sqrt(2)*s/sqrt(m)')
plt.xlabel('normalized singular value')
plt.ylabel('probability density')

plt.show()
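For reference, the quarter circle law the histograms are compared against has the density p(s) = sqrt(4 - s^2)/pi on [0, 2], valid for s = sigma/sqrt(m) when the entries of K have unit variance. A minimal numeric sketch of that comparison (my own illustration, not part of the original script; it skips the normFactor step and scales the complex entries to unit variance directly):

```python
import numpy as np

# Quarter circle density p(s) = sqrt(4 - s^2) / pi on [0, 2]
def quarter_circle_pdf(s):
    s = np.asarray(s, dtype=float)
    return np.sqrt(np.clip(4.0 - s**2, 0.0, None)) / np.pi

rng = np.random.default_rng(0)
m = 1024
# Complex Gaussian entries with unit variance: Var(Re) = Var(Im) = 1/2
K = (rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))) / np.sqrt(2)

# Singular values scaled by 1/sqrt(m) should then fill the [0, 2] interval
s = np.linalg.svd(K, compute_uv=False) / np.sqrt(m)

# Compare the empirical histogram with the theoretical density
counts, edges = np.histogram(s, bins=50, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
theory = quarter_circle_pdf(centers)
```

With the convention above (Re and Im each N(0,1)), the entry variance is 2 instead of 1, which is exactly where the extra factor of sqrt(2) in sNorm3 comes from.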

Singular value decomposition - scaling 2 years 9 months ago #40

  • sebastien.popoff (Administrator, Posts: 25)
Hi Matthias,

I am not an expert in Random Matrix Theory, but here are my remarks:

I think there is a mistake in your first normalization.
You say that you normalized to the biggest absolute value, as you assume no amplification is possible.

First of all, the maximum possible energy transmission is the square of the largest singular value, not the largest matrix element squared.
If you normalize by the maximum singular value, you end up with a maximum transmission equal to one.
In general this is not the case; it is usually much lower.

RMT is not a microscopic theory, in the sense that its results do not come from a theory that takes the physics of the system into account.
The [0,2] range has no physical meaning; we just want to compare to a known distribution.

The normalization of the singular values simply maps them onto this range.

The normalization sNorm1 = s/sqrt(m) as in Edelman & Wang seems correct for an initial distribution of the elements of K with a standard deviation equal to one.
Then, if you take
normFactor = std(K)
you end up with the expected [0,2] range.

The normalization in my paper is actually wrong; it should be
sNorm2 = s*sqrt(m)/sqrt(sum(s**2))
I corrected it in my thesis, but the typo stayed in the paper.
With this normalization, the standard deviation of the initial distribution of the elements of K does not have to be one, so there is no need for the first normalization (with normFactor).
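A quick sketch checking both recipes side by side (my own check, with an arbitrary entry scale sigma0 to show that the corrected normalization is scale-invariant):

```python
import numpy as np

rng = np.random.default_rng(1)
m = 512
sigma0 = 3.7  # arbitrary entry scale; should drop out below
K = sigma0 * (rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m)))
s = np.linalg.svd(K, compute_uv=False)

# Recipe 1: normalize K by its standard deviation, then divide by sqrt(m)
sNorm1 = s / np.std(K) / np.sqrt(m)

# Recipe 2 (corrected formula): scale-invariant in one step
sNorm2 = s * np.sqrt(m) / np.sqrt(np.sum(s**2))
```

Both land on the [0, 2] interval of the quarter circle law and agree with each other, since sqrt(sum(s**2)) is the Frobenius norm of K, which for zero-mean entries is approximately std(K)*m.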


Please note that the Marchenko-Pastur law (or the quarter circle law in this particular case) is not specific to random scattering and does not even hold in the general case.
It holds only when the input/output channels are uncorrelated and when only a small fraction of them is recorded.

If you simulate the total transmission matrix (all input/output channels), you end up with the bimodal distribution instead.

However, in practical applications we are usually very close to the Marchenko-Pastur distribution, as incomplete channel control and optical losses kill the bimodal distribution.
(The transition between the two regimes is treated here, but it is a bit hardcore: journals.aps.org/prl/abstract/10.1103/PhysRevLett.111.063901)
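To make the contrast concrete, here is a toy sketch (my own illustration; it assumes a Haar-random unitary as the lossless scattering matrix, which is not a microscopic model either): keeping all channels, the transmission eigenvalues tau = s**2 pile up near 0 and 1 instead of following the quarter circle law.

```python
import numpy as np
from scipy.stats import unitary_group

# Full scattering matrix S: a Haar-random 2N x 2N unitary (flux conservation).
# The transmission matrix t is its lower-left N x N block, i.e. all
# input/output channels are kept and nothing is lost.
N = 200
S = unitary_group.rvs(2 * N, random_state=42)
t = S[N:, :N]

# Transmission eigenvalues tau = s**2 lie in [0, 1] and are bimodal
tau = np.linalg.svd(t, compute_uv=False) ** 2
frac_extreme = np.mean((tau < 0.1) | (tau > 0.9))  # mass near the two peaks
```

A large fraction of the eigenvalues sits near the closed (tau near 0) and open (tau near 1) channels; for an i.i.d. Gaussian matrix of the same size that fraction would be much smaller.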

Best,

Sebastien

Singular value decomposition - scaling 2 years 9 months ago #41

  • mroth (New Member, Posts: 3)
Hi Sebastien,
thanks a lot for your reply! You're right, the underlying problem was my physical misinterpretation of the scattering matrix. With the correction you stated, it of course works out to the quarter circle law in this specific case, independent of the variance of the distribution.

One more thought on the normalization of the transmission matrix:
Consider the situation where we illuminate all N input pixels with unity amplitude, implement a given input mode n as a phase function, and measure the response at all M output modes. The total recorded energy will be equal to the input (= N*1), or less. So the mean of the M output pixel amplitudes will be <= N/M.

This measurement basically reflects one column of the transmission matrix (that is, if we measure amplitude and phase). The same holds for all other column measurements: the mean absolute value should be <= N/M = gamma.

So could it be that for the case of zero absorption we have to scale to mean(abs(k_ij)) = gamma? For practical experiments, the elements will of course have smaller absolute values.

My updated code looks as follows:
#%% calculations
from numpy import sqrt, angle, mean, sum
from numpy.random import randn
from scipy.linalg import svd
import matplotlib.pyplot as plt
m = 1024
gamma = 2
n = gamma * m

K = randn(m,n)+ 1j*randn(m,n) # transmission matrix
normFactor = mean(abs(K)) # energy conservation (here zero absorption)
K = K/normFactor*gamma # normalization of transmission matrix
U,s,V = svd(K)

# singular value normalization
sNorm = s*sqrt(m)/sqrt(sum(s**2)) # as in Popoff et al. --> corrected

#%% plotting
plt.figure()
plt.hist(sNorm, bins = 50, density = True)
plt.title('histogram of singular values, normalization sqrt(m)/sqrt(sum(s**2))')
plt.xlabel('normalized singular value')
plt.ylabel('probability density')

plt.figure()
plt.subplot(1,2,1)
plt.imshow(abs(K),cmap='gray')
plt.colorbar(fraction = 0.046, pad=0.04)
plt.title('Transmission matrix in amplitude and phase')

plt.subplot(1,2,2)
plt.imshow(angle(K),cmap='jet')
plt.colorbar(fraction = 0.046, pad=0.04)

plt.show()

In this case mean(abs(K)) yields 2 = gamma, which I would expect assuming zero loss.

Cheers,
Matthias

Singular value decomposition - scaling 2 years 9 months ago #42

  • sebastien.popoff (Administrator, Posts: 25)
Hi Matthias,

I am not sure I understand your normalization.
You estimated that mean(abs(k_ij)) should be smaller than gamma, so why try to set mean(abs(k_ij)) = gamma?

First, by setting mean(abs(k_ij)) = gamma, it is still possible that the output energy for one column is bigger than N, which would not be possible without gain.
(Losslessness seems to imply mean(abs(k_ij)) = gamma, but I do not think the converse is true.)
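A contrived 2 x 2 example of that point (hypothetical numbers, purely for illustration): the mean absolute value equals gamma = 1, yet a unit-amplitude input on the first mode comes out amplified.

```python
import numpy as np

# gamma = n/m = 1; mean(abs(K)) = (3 + 1/3 + 1/3 + 1/3) / 4 = 1
K = np.array([[3.0, 1/3],
              [1/3, 1/3]])
gamma = 1.0
mean_abs = np.mean(np.abs(K))

# Output energy for a unit input on mode 1: 3^2 + (1/3)^2 > 1, i.e. gain
col_energy = np.sum(np.abs(K[:, 0])**2)
```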

Moreover, as you say, it is usually much lower.
Once you reach the multiple scattering regime (i.e. when random matrix theory applies), a significant part of the light (> 50%) is backscattered and thus not transmitted.

Basically, the way I usually normalize the matrix is by setting the mean transmission T=mean(abs(K)**2)=mean(s**2)/N to the value I want.
For example, to simulate a medium with a transmission of 10%, I would use:
normFactor=sqrt(mean(abs(K)**2))/sqrt(0.1)
K = K/normFactor
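A self-contained version of this recipe (a sketch with assumed sizes; T = 0.1 is the 10% example above), checking that the mean transmission comes out the same whether computed from the elements or from the singular values:

```python
import numpy as np

rng = np.random.default_rng(3)
m = n = 512          # output / input channels (N = n)
T = 0.1              # target mean transmission (10%)
K = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))

# Normalize so that T = mean(abs(K)**2)
normFactor = np.sqrt(np.mean(np.abs(K)**2)) / np.sqrt(T)
K = K / normFactor

s = np.linalg.svd(K, compute_uv=False)
mean_T_elements = np.mean(np.abs(K)**2)   # T by construction
mean_T_sv = np.sum(s**2) / (m * n)        # same quantity: sum(s**2) = ||K||_F^2
```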

Best,

Sebastien

Singular value decomposition - scaling 2 years 9 months ago #43

  • mroth (New Member, Posts: 3)
Hi Sebastien,

thanks for your reply! I admit that, in the scattering context, the assumption of lossless transmission doesn't make sense. Parametrizing to a fixed transmission is surely a better idea, especially considering that it can be confirmed/measured in an experimental setup.

Cheers,
Matthias