Comments (4)
In Eq. (2), the constraint is Trace(G G^H) <= P, not Trace(G G^H) = P. As far as I know, depending on the CSI, it is sometimes better not to use the whole BS power, and I wonder how you handle that. If you enforce trace(G G^H) = P by normalizing G and multiplying by sqrt(P), I think that is not correct. In my opinion, the fraction of power used at each step should itself be an output of the actor network.
from ris-miso-deep-reinforcement-learning.
I couldn't understand your question exactly. Can you elaborate on it a bit, please?
Anyway, I can describe what I've done in terms of the powers. There are two passages in the paper:
- "The state s^{(t)} at the time step t is determined by the transmission power at the t^{th} time step, the received power of users at the t^{th} time step, ...
- ... the transmission power for the k^{th} user...
Based on both sentences, the authors include the power transmitted from the BS and the power received at the users in the state. Since power is a scalar value (in dBm) and there are K users, there should be K elements per power quantity in the state vector. However, the paper indicates that the powers have real and imaginary parts. How can a power value be complex? In the previous version of the repository I followed exactly what is described in the paper; I then noticed this mistake and fixed it in the latest version.
As for the normalization you mention, I simply implemented Eq. (2) from the paper without the expectation operator, enforcing tr(Gx(Gx)^H) = Pt per the optimization constraint.
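For concreteness, the equality-constrained scaling described above can be sketched as follows. This is a minimal NumPy example; `normalize_precoder` and the matrix shapes are illustrative, not the repository's actual code:

```python
import numpy as np

def normalize_precoder(G, Pt):
    """Scale the precoding matrix G so that tr(G G^H) equals the budget Pt.

    This mirrors the equality form of the constraint, tr(G G^H) = Pt,
    with the expectation operator dropped.
    """
    current_power = np.real(np.trace(G @ G.conj().T))
    return G * np.sqrt(Pt / current_power)

# Illustrative precoder for M=4 antennas, K=2 users (random, not from the repo)
rng = np.random.default_rng(0)
G = rng.standard_normal((4, 2)) + 1j * rng.standard_normal((4, 2))
G = normalize_precoder(G, Pt=1.0)
print(np.real(np.trace(G @ G.conj().T)))  # ≈ 1.0
```

The scaling is exact up to floating-point error, so the trace lands on Pt regardless of how large the raw actor output was.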
I hope this answers your questions.
I am not the author of the paper; I only implemented what is reported. The authors use trace(G.G^h) = P (see the last paragraph before Section IV-B, on page 7). FYI, most of the time trace(G.G^h) <= P is already satisfied; I have checked the output of the actor network many times. When it is not satisfied, normalization is applied, and it operates correctly (you can verify this in debug mode). Also, the paper is full of mistakes, so your point about incorrectness may well be true.
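The "normalize only when the budget is exceeded" behavior described above could be sketched like this. It is a hedged illustration; `enforce_power_budget` is an assumed name, not the repository's actual function:

```python
import numpy as np

def enforce_power_budget(G, P):
    """Scale G down onto the power budget only when tr(G G^H) > P;
    otherwise return it unchanged."""
    power = np.real(np.trace(G @ G.conj().T))
    if power > P:
        G = G * np.sqrt(P / power)
    return G

# Illustrative usage with a random complex precoder (not the repo's channel)
rng = np.random.default_rng(0)
G = rng.standard_normal((4, 2)) + 1j * rng.standard_normal((4, 2))
G_ok = enforce_power_budget(G, P=1.0)             # exceeds 1.0, so it is scaled
G_same = enforce_power_budget(0.1 * G_ok, P=1.0)  # already feasible: unchanged
```

When the actor's raw output already satisfies the constraint, the precoder passes through untouched, which matches the behavior reported in the comment above.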
Hi, I am also confused about the power. As you say, the power equals Trace(G.G^h), but when you compute the power in the step part of the environment you use
`np.real(np.diag(self.G.conjugate().T @ self.G)).reshape(1, -1) ** 2`,
which amounts to P ** 2, so I think the transmit power will be a wrong value.
Could you tell me why you use this expression? Maybe there is some knowledge I haven't learned yet.
Thank you.
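To make the concern above concrete, here is a small NumPy sketch, with a randomly generated G standing in for `self.G`, contrasting the squared-diagonal expression with the trace:

```python
import numpy as np

# Random complex precoder for M=4 antennas, K=2 users (illustrative only)
rng = np.random.default_rng(1)
G = rng.standard_normal((4, 2)) + 1j * rng.standard_normal((4, 2))

# diag(G^H G) gives the per-user powers ||g_k||^2 for each column g_k.
per_user = np.real(np.diag(G.conj().T @ G))

# The total transmit power is their sum, i.e. tr(G G^H).
total = np.real(np.trace(G @ G.conj().T))
assert np.isclose(per_user.sum(), total)

# Squaring the diagonal, as in the quoted line, yields (||g_k||^2)^2 per user,
# so the reported value scales like P**2 rather than P.
squared = per_user ** 2
```

In other words, the diagonal entries are already powers, and squaring them once more is where the P ** 2 discrepancy comes from.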