Tesla Solar-Only Dashboard #183
I have Solar-Only and can assist in validating.
@jasonacox and @apU823 - I like this idea and I believe it is certainly possible. 👍 I have some ideas/thoughts about this that I'll throw out there to help determine what options we have and how we might best go about implementing this. One thing I noted while developing the tesla-history tool was the ability to get "current" site data, i.e. solar generation etc., and the thought struck me that you could implement an alternative service to pypowerwall (or even make it configurable) that, instead of using the local gateway as the data source, could use the Tesla cloud service directly. I will come back to this later with a more detailed post though, once I've had some more time to document and test a few ideas I have.
Hello, following up to ask if there is any way to support the non-PW dashboard yet :)
I created a placeholder for this effort: https://github.com/jasonacox/Powerwall-Dashboard/tree/main/tools/solar-only

It duplicates all of the core Dashboard project files with edits to remove pypowerwall and telegraf. Ultimately, I would like to de-dupe all of this if we can get it to work and have it as an option in the main setup.sh. But for now, and for testing, we can start here.

TO BE DONE: The setup creates the InfluxDB and Grafana services. The tesla-history script will need to be part of setup.sh to get the token, and then be converted to a service that polls the API every x minutes to pull in the updated data.

Also, @mcbirse if you have a better idea, I'm happy to change course.
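For illustration, the "convert to a service that polls every x minutes" idea could be sketched roughly as below. All names here are hypothetical, not the actual implementation - just the shape of the poll loop:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical daemon settings
WAIT_MINUTES = 5    # pause between poll requests
HIST_MINUTES = 60   # window of history to (re)fetch each poll

def poll_window(now=None):
    """Compute the history window a single poll would request."""
    now = now or datetime.now(timezone.utc)
    return now - timedelta(minutes=HIST_MINUTES), now

# A service would loop forever, fetching each window from the Tesla
# cloud and writing it to InfluxDB, then sleeping:
#   while True:
#       start, end = poll_window()
#       fetch_and_write(start, end)   # hypothetical helper
#       time.sleep(WAIT_MINUTES * 60)
start, end = poll_window(datetime(2023, 3, 4, 12, 0, tzinfo=timezone.utc))
print(start.isoformat(), end.isoformat())
```

Re-fetching a trailing window each poll (rather than only the newest point) would also paper over any polls that are missed.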
Sorry @apU823 - I have not had time to test or investigate my ideas due to other projects taking all my time 😞

First step is trying to work out what changes we need to make to the tesla-history script. Are you able to run the script in login mode with the debug option? It would be interesting to see what data is returned from your Tesla account about your site (if anything) without any changes to the script first. You can do this as below.

```shell
# Run tesla-history login with debug output
python3 tesla-history.py --login --debug
```

If you could post your output that would be great. Make sure to remove your personal information however, such as email, site id & name, address, geolocation, and serial numbers, etc. Currently the tesla-history script uses the

@jasonacox - I think your approach is a great start to try to get something working. I had some other ideas, but unfortunately I don't think I have time to work on this. My thoughts were to implement "live" monitoring rather than getting "historical" data from the Tesla cloud (like the tesla-history script does currently). Definitely possible, but more effort required. In its most basic / brute force form however, if the tesla-history script can retrieve historical data for solar-only accounts, it would even be possible to simply run it from a cron job every five minutes (e.g. run
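For reference, that brute-force cron approach could look something like the entry below. This is a sketch only - the path is assumed, and the exact "pull today's data" flag is a guess modelled on the script's `--yesterday` option:

```
# every 5 minutes: re-pull today's history from the Tesla cloud (crontab -e)
*/5 * * * * cd ~/Powerwall-Dashboard/tools/tesla-history && python3 tesla-history.py --today >> /tmp/tesla-history.log 2>&1
```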
Got the following error. I forgot to run with debug on first, so I ran it a second time.
@apU823 - thanks for posting. This confirms we will need to change the script to use the

I'll work on some changes to the tesla-history script. @jasonacox - I'll submit the updated script to the placeholder you have created, so we can keep these changes separate for now during testing and development. I think first steps are some minimal changes to the script to see what historical data can be retrieved from solar-only accounts, unless you had any other suggestions?
Thanks @mcbirse! I agree.
Hi @apU823 - I have updated the tesla-history script which should hopefully 🤞 now support Tesla accounts with solar only products. I am unable to test the changes however, so it would be great to get some feedback from your testing.

Please note - the updated script is available in this directory only: https://github.com/jasonacox/Powerwall-Dashboard/tree/main/tools/solar-only/tesla-history

For testing this, you should be able to run

Then when testing, make sure you are running the script from the "tools/solar-only/tesla-history" directory now. If you hadn't installed Powerwall-Dashboard, you may need to run

If you can try some of the below tests and post the output, that would be terrific (please remove any personal info though).

```shell
# Run tesla-history login with debug output
python3 tesla-history.py --login --debug
```

Hopefully, this will now show some information about your site, rather than returning "ERROR: No Tesla Energy sites found".

We also need to find out what power history data is returned for solar only sites. So, if the above worked, then you can try the below command. This should return the output required to help us determine what further changes may be needed based on the response data. 🙏

```shell
# Retrieve power history data for all of yesterday
python3 tesla-history.py --test --force --debug --yesterday
```

Looking forward to seeing how this goes! (and hoping it's not just a complete fail 😬 - very hard to know what will happen when unable to test this myself....)
@mcbirse - I did the git pull before I ran any of the scripts, but am still getting the same error:
This is what my tesla-history.conf file looks like:
I am also working with another developer here: https://github.com/jweier/SolarDataParser

I've been able to use his code to output data (on a daily basis). Not sure if this would be helpful to somehow merge his code into this project.
@apU823 - It looks like you are running the original tesla-history script (which hasn't been changed). Before trying to run the test commands I listed, please change your working directory.

```shell
cd ~/Powerwall-Dashboard/tools/solar-only/tesla-history
```
I swear I did the git pull before I ran the code, but now it did pull correctly. The code ran.
^^ This was the last time slot it pulled data for. I saw this at the end of my output:
Does this mean it did not write to the db? What's next? Would you like me to share the terminal output?
Hi @apU823 - that's great, looks like it is working! Yes, I included

Would be very helpful if you could post the full output from both the

What I really need to see are example "power" values from when your solar was producing something. Thanks!
tesla-history.py - login - debug.txt

Please see attached. A couple questions -
Great, thanks. I need some time to review in detail to see what is going on here. At first glance, there seem to be some issues though.
In reference to 2. above, there seems to be an issue with timezones that is likely causing this. At the moment it is not getting a full day like it should. Your output is showing

The script does not look at the timezone when retrieving the historical data and assumes it would be the same as the site config. This is likely causing the issue with getting a full day of data. I'll need to make some further changes to the script to hopefully address this.
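To illustrate the suspected mismatch (the timezone values here are assumptions for the example, not from the actual site config): if the request window is built from the site's local timezone, but the returned time_series is actually stamped in UTC, the "day" boundaries shift by the UTC offset:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

site_tz = ZoneInfo("America/New_York")  # e.g. timezone reported by the site config
utc = ZoneInfo("UTC")                   # timezone the time_series actually uses

# Midnight at the start of the "day" in the site's local time...
local_midnight = datetime(2023, 3, 4, 0, 0, tzinfo=site_tz)

# ...is 05:00 in UTC, so a local-day request lines up five hours off
# against the UTC-stamped history, truncating the start of the day.
print(local_midnight.astimezone(utc).isoformat())
```

(The offset would also change across DST transitions, which matches the one-hour drift observed later in this thread.)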
Hi @apU823 - I have updated the script - once this has been merged, please pull the latest changes and re-test the same commands as before and post the output. 🤞
@mcbirse Running:

```shell
python3 tesla-history.py -t -d --start "2016-11-01 00:00:00" --end "2023-02-27 23:59:59"
```

returns "ERROR: Failed to retrieve SITE_CONFIG - 'site_name'". I tried adding --site "################" with the site ID that is listed for my solar and I still get the same response.

Running this in debug mode I can see that it is authenticating OK and is getting the data for two sites. First it has a long section for Get SITE_CONFIG for Site ID {my powerwall site ID}. Then it has a section starting with "Get SITE_DATA for Site ID {my powerwall site ID}". Then it shows a section starting with "Get SITE_CONFIG for Site ID {my solar site ID}". After all that config data it shows the error. I also tried running it with --site specifying my Powerwall site ID and get the same error.

I have successfully used an older version of the history tool in the past to fill in gaps in my data. I just ran that older version and it works for dates after my Powerwall was added, and doesn't return any data but does not throw any errors for dates before my Powerwall was added.
Tracing the code as best I can, it looks like there is no SITE_NAME attribute for my solar-only site which causes the error here:
Here is the SITE_CONFIG with personal data removed:
So, after commenting out every reference to sitename and sitetimezone, since neither exists in my solar-only response data, I got it to list both sites.

I then ran it with the solar site ID and it started throwing more errors, so I kept commenting out anything that was erroring on the missing site name and site timezone, and finally added "sitetz = influxtz" around line 897 just so there would be some value for sitetz, and it seems to be working. I don't know how the timezones are lining up yet. I'm going to try to pull some old solar data and see if I can compare with the Tesla app.
I don't need to compare to the Tesla app; I can use the time of production vs the sun/moon graph to see if the data makes sense. I pulled 01 Jan 2022. Judging from the sun/moon curve and the solar curve, I think the timezones are lined up correctly.

I can't tell if it's matched up during DST. This is 01 Jul 2021. It looks like production starts after sunrise and ends after sunset, but I'm not sure.

Edit: Sometimes I'm a little slow... I compared to graphs from July 2022 and they are consistently centered on the sun/moon curve, so I think my hacked version is causing the data to be off by one hour during DST.

One issue: It seems the script (with my hacking) is duplicating the solar data in the home usage data, which is why my plots are green instead of yellow in the screenshots. Is that intentional? It seems the home usage should be null when pulling the solar-only site data. I hacked at it a little more and have it only writing the solar value to the database.
F###. I just deleted my entire http data while trying to delete one day of the imported data... and I think my backup wasn't working correctly, so I can't restore the database. Looks like I'll need to use the import tool to restore my entire last year of Powerwall data. Have I mentioned that I HATE InfluxDB?
@youzer-name - thank you! This is really helpful information. 🙏 It is so difficult to make changes to the script without being able to test against your own system. And clearly, there are many variables with the data that the Tesla cloud can respond with or not include, that we need to try to account for.

Sorry to hear your data got deleted! Was this an issue with the tesla-history

Are you able to give some debug output that includes the power history data please? i.e. similar to what @apU823 provided below. I would really like to see if your data also includes the "installation_time_zone" field just before the "time_series" data.

```
* Loading daily history: [2023-03-04]
{
    "serial_number": "************************",
    "installation_time_zone": "UTC",
    "time_series": [
        {
            "timestamp": "2023-03-05T00:00:00Z",
            "solar_power": 0,
            "battery_power": 0,
            "grid_power": 0,
            "grid_services_power": 0,
            "generator_power": 0
        },
```

I think your timezone data lined up mostly okay (although maybe not the DST changes), as I notice in your output for the site list both the "Installed" time and "System time" are the same timezone, so manually setting the timezone to influxtz would probably be fine.
However, with the site list from @apU823 I noticed his "Installed" time is in UTC, but "System time" is in his timezone. I think this is why the power history timeseries data is returned in UTC - as his system may have been set to UTC initially I guess, and the Tesla cloud continues to store/return that data in UTC, despite the timezone being corrected later?
InfluxDB - also not much of a fan. Given the info you have provided already, I can see some further improvements I can make to the script and will work on those. Thank you.
I'm glad my sacrifice helped the team 😦
Not the script, but I didn't think I "typo'd" either. I think maybe I just issued a command that wasn't formatted correctly and Influx just ran with it. I'm still suffering from PTSD (post-traumatic-stupid-database-user), but I think what I did first was try to delete the data I imported for 01 Jul 2021 with `delete from autogen.http where time < '2021-07-02'`. InfluxDB responded with something about not being able to delete from specific retention policies... so I dropped the 'autogen' part. I'm pretty sure that `delete from http where time < '2021-07-02'` wiped my entire http measurement. If I had typo'd the < into a >, I would still have had the data from 01 July, but everything was gone. So I think it just ignored the date. (I was running from "influx -precision rfc3339", so the date format should have been good).
I've attached one day's data pulled with my modified version of the script. Mine has a "time_zone_offset" just before the time_series.
The Powerwall gateway data that I just recreated using the old version of the tool seems to have everything lined up correctly for both DST and non-DST.
The original script was a lifesaver for me today. Lesson learned about verifying backups. My backup cron job was running every night, but not doing anything useful. Part of the script needed sudo rights, but not the part that made the archive and wrote it to my network folder, so I saw a new file showing up every day as expected and thought it was working. Unfortunately the part of the script where Influx was supposed to be taking the new snapshot wasn't working, so I've been backing up the database as of a day in March 2022 every night since then! It just kept backing up the last snapshot that I successfully made when originally setting up the backup. I had to move the job to the root crontab (sudo crontab -e) to get it to work. I haven't done an actual restore, so I suppose I'm still not being smart, but I opened the archive on the other end and checked the file dates to make sure it's working now.

I'm tempted to go full rogue and push everything to MySQL and reconfigure all my dashboards to use that. At least I know how not to hit the big red 'kill all my data' button in MySQL... (I think)
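For anyone else checking their backups after reading this, one hedged sketch of a root crontab entry (sudo crontab -e) for a nightly InfluxDB 1.8 portable backup - paths, database name, and schedule are all assumptions to adapt:

```
# nightly at 02:00: portable InfluxDB backup into a dated folder
# (note: % must be escaped as \% inside crontab entries)
0 2 * * * influxd backup -portable -database powerwall /backups/influxdb/$(date +\%Y\%m\%d) >> /var/log/influx-backup.log 2>&1
```

Writing into a dated folder makes a stale snapshot (like the March 2022 one above) easy to spot at a glance.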
@mcbirse - You've saved me from needing to roll my own solution to get data from the SolarEdge API! Running this daemon should allow me to get the data I need to (approximately) calculate the costs/savings from my two solar arrays. I have a few questions about setting it up in my environment.

Short recap: I have a Solar City/Tesla array that I pay for under a PPA and two Powerwall 2's. I just added a second solar array (purchased) that will have a SolarEdge inverter once the installers come back with a working unit to replace the one that died as soon as they turned it on. The Powerwall Gateway will see the total production of both arrays, so I need some way to get either the SolarEdge or the Tesla production numbers separate from the PW Gateway so I can do the math to figure out how much solar was generated by each array.

My Powerwall Dashboard setup has been customized a lot, so I'm not using the setup.sh script or git pull. I download and integrate any updates that I want manually. Scrolling through the topic, it looks like if I just change the database name in the config file from 'powerwall' to 'solar' (or something similar), the daemon will be able to write the data to InfluxDB without any conflict with the Powerwall Dashboard data. Do I have that right? So could I:

If I'm following along correctly, that will create a container in the Powerwall Dashboard stack that will pull the solar-only data and write it to whatever database I specified in the tesla-history.conf. Any other changes I'd need to make?

** footnote: Since the Powerwall Gateway always shows higher production numbers than my Solar City inverter, the best I can get is approximate numbers. When looking at monthly totals, the Powerwall Gateway shows about 4% more solar production than I'm being billed for via the Solar City inverter data. I assume the difference is down to one being a revenue-grade meter and the other not.
Upgrade went off without a hitch for me and the tesla-history container is working great. Thank you again for this!
@mcbirse - So I gave it a go and it looks like I was able to get it running the way I wanted. I did encounter one issue related to the DB HOST configuration that may be a bug.

I created a 'solaronly' database in InfluxDB and I copied my .conf and .auth files to ./tesla-history. I updated the .conf to add my site ID and change the database name to 'solaronly'. I added the tesla-history section to my powerwall.yml and did a down/up of the stack.

Initially I got an error on the database connection. When I was running this before (not in the docker container) I was able to use "192.168.0.12" as my HOST and 8085 as my port for InfluxDB. I used to have two instances of InfluxDB running, so this one is named "pw-influxdb" and exposes port 8085 while using port 8086 internally. From outside the container it is at 192.168.0.12:8085 and from inside the stack it is at pw-influxdb:8086. When I first tried to run this using the IP and external port I got this error:

I'm not sure where it was getting 'influxdb' as the DB host. The conf file said:

It picked up the port number, but I'm not sure why it ignored the IP address. So I switched it to use the internal reference to the database:

and that cleared up the connection error. Is that a bug in the way the conf file is read? I have data sources in Grafana set up both ways and it is able to connect to the database whether using the internal or external address and port.

The final thing I had to do to set this up manually was to create the retention policies. I went ahead and duplicated all the retention policies from the main database, but I expect that several of those (pod, alerts, etc) weren't needed. I'm seeing data flowing into the solaronly database, in the http measurement, and in the autogen, kwh, daily, and monthly retention policies.
@youzer-name & others - sorry, I have been busy this week and have not had time to get back to several posts here.

Just quickly - check your powerwall.yml config, as the InfluxDB hostname can be set there via an environment variable for when tesla-history is running in a docker container, or daemon mode (the IHOST env variable). I did this on purpose, as the hostname when running in a docker container could be different to when trying to run the tesla-history script outside the container, for instance to extract history data. So if the powerwall.yml config defines the InfluxDB IHOST env variable, this will override the hostname in the tesla-history config file when running in daemon mode. I hope that makes sense!
@mcbirse - I missed that host reference in the powerwall.yml and I just updated it to be 'pw-influxdb' to match my environment. Does it make sense that it used the IHOST from powerwall.yml when I entered an IP address in tesla-history.conf, but it used the HOST from tesla-history.conf when I changed that line of the conf file to use the container name?
@youzer-name - I'll try to explain the configuration scenarios and what I intended with this, and hopefully it makes more sense. If it is not working per below there could be a bug, as I have not tested all scenarios.

The tesla-history script was originally intended to be run from the command line only, to pull in historical data (so, not as a daemon or in a docker container). Adding a daemon mode option and the ability to run in a docker container complicates the setup a little bit. I wanted it to have the flexibility to be able to run in a docker container, or still be run outside the container to pull in historical data as well, or even in daemon mode manually from the command line or as some other system service... however, still using the same common/shared config for all cases.

When running in a docker container, the InfluxDB hostname and port could be different to if you were to run the script from outside the container (even though you are writing to the same InfluxDB instance). So, I added an option to be able to define the InfluxDB hostname via an environment variable for daemon mode, i.e. for when running in a docker container. If the environment variable is defined, this will override the hostname defined in the tesla-history.conf file. This also means however that the hostname defined in the tesla-history.conf file can be set to the host/ip required for running the script from outside the container (e.g. to pull history data), which gives some flexibility. Example per your setup:
NOTE: I've realised from your setup I should allow the PORT to be defined by environment variable in the docker compose configuration as well. I will update the script to support this.

For your example setup, you might have a docker compose config like below. The InfluxDB internal hostname is "pw-influxdb" with an internal target port of 8086, and the externally published IP/port is 192.168.0.12:8085. Given that, in the same file it is easy to know/see what should be defined for the tesla-history docker container by looking at your pw-influxdb container configuration. So for the tesla-history container to connect to the internal InfluxDB instance, you would define the environment variables as "IHOST=pw-influxdb" and "IPORT=8086":

```yaml
services:
  pw-influxdb:
    image: influxdb:1.8
    container_name: pw-influxdb
    hostname: pw-influxdb
    restart: always
    volumes:
      - type: bind
        source: ./influxdb.conf
        target: /etc/influxdb/influxdb.conf
        read_only: true
      - type: bind
        source: ./influxdb
        target: /var/lib/influxdb
    ports:
      - target: 8086
        published: 8085
        mode: host

  tesla-history:
    image: jasonacox/tesla-history:0.1.0
    container_name: tesla-history
    hostname: tesla-history
    restart: always
    volumes:
      - type: bind
        source: ./tesla-history
        target: /var/lib/tesla-history
    environment:
      - IHOST=pw-influxdb
      - IPORT=8086
      - TCONF=/var/lib/tesla-history/tesla-history.conf
      - TAUTH=/var/lib/tesla-history/tesla-history.auth
    depends_on:
      - pw-influxdb
```

Then, in the tesla-history.conf file, you can configure the HOST/PORT to be the external hostname/port that is used outside the container, i.e. "HOST = 192.168.0.12" and "PORT = 8085":

```ini
[Tesla]
# Tesla Account e-mail address and Auth token file
USER = yourname@example.com
AUTH = tesla-history.auth

[InfluxDB]
# InfluxDB server settings
HOST = 192.168.0.12
PORT = 8085
# Auth (leave blank if not used)
USER =
PASS =
# Database name and timezone
DB = solaronly
TZ = America/New_York

[daemon]
; Config options when running as a daemon (i.e. docker container)
# Minutes to wait between poll requests
WAIT = 5
# Minutes of history to retrieve for each poll request
HIST = 60
# Enable log output for each poll request
LOG = no
# Enable debug output (print raw responses from Tesla cloud)
DEBUG = no
# Enable test mode (disable writing to InfluxDB)
TEST = no
# If multiple Tesla Energy sites exist, uncomment below and enter Site ID
SITE = 123456789
```

This means you can still run the tesla-history script at any time from outside the container and pull history data without having to have a different config file, and the tesla-history docker container still runs fine as the hostname/port is defined by the environment variables in the docker compose configuration.

I hope that makes sense, or if you see any issues with this or something is not working, please let me know. I will update the script to also allow the PORT to be defined by environment variable as well, which would be required for your setup.
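The override behaviour described above can be sketched as below. This is a simplified illustration, not the script's actual code; the variable names follow the IHOST/IPORT convention:

```python
import os
import configparser

# Simplified illustration of env-var overrides for daemon mode: the conf
# file holds the external host/port, while IHOST/IPORT (set e.g. via the
# docker compose "environment:" section) take precedence when defined.
config = configparser.ConfigParser()
config.read_string("""
[InfluxDB]
HOST = 192.168.0.12
PORT = 8085
""")

ihost = os.environ.get("IHOST", config["InfluxDB"]["HOST"])
iport = int(os.environ.get("IPORT", config["InfluxDB"]["PORT"]))
print(ihost, iport)
```

Outside the container neither variable is set, so the external address from the conf file is used; inside the container, compose supplies the internal hostname/port.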
@youzer-name - In the latest update, defining the PORT is now supported. Please note the variable names used for the tesla-history docker compose config have been changed to be more descriptive, i.e.:

```yaml
environment:
  - INFLUX_HOST=influxdb
  - INFLUX_PORT=8086
  - TESLA_CONF=/var/lib/tesla-history/tesla-history.conf
  - TESLA_AUTH=/var/lib/tesla-history/tesla-history.auth
```
@mcbirse That all makes sense, and I'm glad my setup was weird enough to shed light on the potential need for the port config option. 😄

On the off chance that anyone is interested in what the data from the solar-only history looks like compared to the data from a Powerwall Gateway, this is what I'm seeing: "Solar City" is coming from the history API and "Solar Energy" is coming from the Gateway. As expected, the history data smooths out the peaks and valleys due to the lower sampling rate. This shows all of yesterday, but during the day the history data also lags slightly at the right edge of the graph due to the 5 minute update interval.

As of (hopefully) Tuesday, when I get the inverter replaced on my new array, the line for "Solar Energy" will be quite a bit higher as it will be the total solar output seen by the Powerwall Gateway from both arrays.

It looks like the daily totals from the tesla-history API exactly match what I get in the Tesla app, and those numbers match what I get billed for under my PPA, so I'm going to adjust my cost calculations to use the solar-only history generation numbers. That should make them dead-on accurate going forward, whereas in the past they had always been a bit off due to the higher generation numbers coming from the Gateway.

Thanks again for all the time and effort you (and everyone else) have put into these tools.
Does anyone have an example "solar-only" dashboard screenshot you would be willing to let us post on the https://github.com/jasonacox/Powerwall-Dashboard/tree/main/tools/solar-only#tesla-solar-only page as an example?
Thanks @Jot18! It looks great!
Can you share your dashboard file?

[image: Screenshot (2)] <https://user-images.githubusercontent.com/20891340/252177682-3f954359-e851-462e-ba20-e1ad90db5bd7.png>
@apU823 |
The dashboard should also be in the solar-only folder: https://github.com/jasonacox/Powerwall-Dashboard/blob/main/tools/solar-only/dashboard.json |
I have solar only and a new Tesla inverter that seems to have a different management system than older inverters. When I go to the inverter IP address on my local network, it quickly redirects from the login page to an /upgrade page that prompts to install the Tesla Pros app. Using the Tesla Pros app I am readily able to log in as a customer, connect to the TEG network, and view/set all inverter data and settings (usage/generation, MPPT string data, CT settings, wifi settings, etc.).

I'm hoping to be able to access this same data directly from the wifi network. I'm able to stop the web redirect and get the login page. And I'm similarly able to POST using curl to /api/login/Basic, but in both cases I get "bad credentials" errors. I believe the issue is with the password not being correct, but I've no idea what this might be - my Tesla account password, the password I used when I set up the gateway, and the last 5 of the inverter serial do not work. I think that if I can figure out authentication (as is readily done in Tesla Pros), then we'd be able to add back a lot of the inverter control and data to the dashboard for solar-only customers. Any ideas?
Do you have a model number for the new inverter?
I wonder if they simply forgot to turn off Tesla Pro access?
I made some progress. It took some experimenting, but I found the default password is the last 5 values of the TEG WIFI password (not the serial number). So, I was then able to set my "customer" password at /password. I can now see the web-based monitoring system in my browser.

Now that I have my customer password set, I'm able to view my access token in DevTools. I can also generate a token by authenticating using:

```shell
curl -k -i -X POST https://192.168.0.5/api/login/Basic -H "Content-Type: application/json" -d "{\"username\": \"customer\",\"password\": \"PASSWORD\"}"
```

This returns a token value (such as):

```json
{"email":"","firstname":"Tesla","lastname":"Energy","roles":["Home_Owner"],"token":"avUuKAQsnRiaOVygfqDaWjRCqUEUDJ7NDHu-Cfl9eE-ld-tQILYtjvt0T9C-AdGMNkAEKaYAgo0ALivFINoIhQ==","provider":"Basic","loginTime":"2023-07-21T21:43:06.687730825-06:00"}
```

With the returned token I can access most of the Powerwall API endpoints <https://github.com/vloschiavo/powerwall2>. For example:

```shell
curl -k -i --header "Authorization: Bearer avUuKAQsnRiaOVygfqDaWjRCqUEUDJ7NDHu-Cfl9eE-ld-tQILYtjvt0T9C-AdGMNkAEKaYAgo0ALivFINoIhQ==" https://192.168.0.5/api/meters/aggregates
```

returns the real-time site, load, and solar generation values.

To answer your questions, the hardware shows model 1535843-00-D. The /api/solars endpoint shows model of "PVI-45".
Just curious, can you share some screenshots of what the web GUI looks like, with the real-time data? Might make for an interesting post over on r/teslasolar.
GREAT thread, as I have exactly this situation since I am also solar-only... my inverter is 1538000-45 and the installed software is 23.12.3. I can confirm what @jared-w-smith wrote above. LOL that one has to throttle the browser to 3G to get past the redirect from the login screen.

I'm VERY disappointed (with Tesla!) to hear that you can't pull the String Data (since that is what I'm interested in also)... and I too was unsuccessful after a little bit of poking around. It's gotta be there somewhere... since I too can see it in the Tesla Pros app.
Hi @jasonacox - referencing your comment above from way back in March (has it been that long!?)... I have also always had in mind that eventually we should de-dupe the solar-only offshoot and merge it into the main Powerwall Dashboard stack as a setup option. I believe the beta testing is complete and successful (thank you to everyone here for their help, testing & feedback!). As such, I've been working on this to come up with a nice solution that should be quite extensible for future changes as well, and enables a more modular approach to how we can handle different docker container requirements depending on the user's setup (i.e. for now, Powerwall vs. Solar Only). Is it okay if I create a new branch that I can start to commit changes to? I'd rather commit here directly than to my own fork. Is a branch and new version okay? Here's details of what I have planned and have tested so far.

Use profiles with Docker Compose

Docker Compose supports assigning profiles to services. The profiles to use can be specified with the `--profile` command-line option or the `COMPOSE_PROFILES` environment variable. The different container requirements for a Powerwall vs. Solar Only setup can be handled easily this way, with the profile assignments added to the Compose file.
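To illustrate how profile assignments in the Compose file could look (the profile names, the exact service split, and the elided service definitions are all assumptions for this sketch, not the final layout):

```yaml
# Sketch only: each service lists the profiles it belongs to.
# Shared services carry both profiles; pypowerwall and telegraf are
# Powerwall-only, while tesla-history is Solar Only.
services:
  influxdb:
    profiles: ["default", "solar-only"]
  grafana:
    profiles: ["default", "solar-only"]
  weather411:
    profiles: ["default", "solar-only"]
  pypowerwall:
    profiles: ["default"]
  telegraf:
    profiles: ["default"]
  tesla-history:
    profiles: ["solar-only"]
```

The stack would then be started with the matching profile, e.g. `docker compose --profile solar-only up -d`, or by exporting `COMPOSE_PROFILES=solar-only` before running the usual commands; services whose profile is not enabled are simply not created.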
Setup script changes

The `setup.sh` script would be updated to select and apply the appropriate profile based on the user's setup.
The same would be done with the Docker Compose commands.
|
I love this, @mcbirse ! I think it would be appropriate to bump this to v3.0.0. I could be convinced to go with v2.11.0, but it feels like a more significant update. Feel free to create a new branch. |
Thanks @jasonacox - will do, and the changes are definitely v3 worthy! I just need to reconcile and review some of my work on a couple of test systems, and then start committing those to the branch. Once I think it is ready for testing, I will create a PR to main, at which point it should be ready for testing and review etc. The tesla-history script has a small update and will need a push to Docker Hub again (small bugfix and added functionality). There are more changes than just merging the solar-only offshoot as well. As I noticed any issues during testing I decided to try to address them. This includes trying to address some common problems we have faced, e.g. permissions issues, by adding some more checks/options in the setup script. Happy for review/testing/accept/reject/revise etc. Honestly, the changes I've been working on kind of just snowballed into a lot more than I intended originally. |
Nice! You always spot good improvements. Looking forward to this... 👍 |
Hi @jasonacox - I believe I have committed the majority of my changes to the v3.0.0 branch now... just pending some release notes (which could be extensive depending on how much detail we want), and then I will create a PR for test and review. When you have a chance, would you be able to push the updated tesla-history image to Docker Hub? |
Done! https://hub.docker.com/r/jasonacox/tesla-history/tags With this being a major rev, I suggest we keep the release notes high level (the commits will show the deltas for anyone wanting to know the details). |
There has been a lot of interest from Tesla Solar owners who don't have a Powerwall in getting a similar dashboard for their systems. I'm opening this as a feature request to explore creating a derivative dashboard using a similar stack, plus the tesla-history script developed by @mcbirse as a potential service to feed a solar-only dashboard.
Reference: Reddit Thread
Proposal
Add a Tesla-Solar-Only dashboard option in the tools folder. Basically, an InfluxDB + Grafana + Import-Tool (optionally + Weather411) stack. The Import-Tool could be converted into a Python service, for instance, that polls the Tesla API every 5 minutes or so and stores the data in InfluxDB. We could use the dashboard.json and remove the Powerwall-specific panels.
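A rough sketch of what such a polling service could look like is below. The measurement and field names, the injected `fetch`/`write` callables, and the 5-minute interval are all hypothetical placeholders here, not the actual Tesla or InfluxDB client APIs:

```python
import time

def to_influx_line(measurement: str, fields: dict, ts_ns: int) -> str:
    """Render one data point in InfluxDB line protocol (numeric fields assumed)."""
    body = ",".join(f"{name}={value}" for name, value in fields.items())
    return f"{measurement} {body} {ts_ns}"

def poll_forever(fetch, write, interval_s=300):
    """Every interval_s seconds, fetch current site data and write it to InfluxDB.

    `fetch` would wrap the Tesla cloud API call (as tesla-history does) and
    `write` would wrap the InfluxDB client; both are injected so the loop
    itself stays trivial to test.
    """
    while True:
        sample = fetch()                  # e.g. {"solar_power": 1500.0}
        ts_ns = int(time.time() * 1e9)    # InfluxDB expects ns timestamps
        write(to_influx_line("solar", sample, ts_ns))
        time.sleep(interval_s)
```

Running this under docker with a restart policy would give a simple, self-recovering feed into InfluxDB for the Grafana panels to query.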
We need someone who has a Solar-Only setup to validate.