Any way to import all historical data? #12
Hi @mikkyo9 - good question. The Powerwall doesn't keep that history itself. When you install the "Powerwall-Dashboard" stack, the Telegraf service starts polling the Powerwall to get the metrics. The data is then stored in the DB for Grafana to display. The Tesla App has this historical data because it (Tesla Cloud) started polling the Powerwall when it was commissioned. I'm not aware of a way to pull this historical data into the database, and even if we could, it would be missing some data like the String and Temperature data.

You could probably manually download each day's data and then write a script to transfer that information into the influx DB. I wish I could do an aggregate data download with fine granularity. Best I've found is to do a daily download in the phone app, then upload that to my dropbox and then view the info on my desktop.

@jasonacox - Your abilities greatly outpace mine, so I'm bringing an idea without specific testing. On https://www.teslaapi.io/powerwalls/state-and-settings and https://www.teslaapi.io/energy-sites/state-and-settings there appears to be an API call for historical data using a query period of day, week, month, year (similar to the granularity offered in the app). My thought is: could one develop a script with a start date, and then increment by day to download historical data?

Thanks @dmayo305 - I have been looking at those Tesla cloud APIs (used by the Tesla app) to add features to pyPowerwall (e.g. update battery reserve setting). However, it seems Tesla keeps making breaking changes, so I haven't found a good example/library to use. TeslaPy seems to be close but I haven't been able to get it to work yet. If we can figure it out, it wouldn't be too difficult to do what you mention.

I have a sample project that uses TeslaPy to download Powerwall and Solar site data as json files. I imagine it could be an example of how to get the data for adding to the Dashboard, but I don't know if it would need to be reformatted before loading in to the database. This example uses the "refresh token" method for logging in, so you'll need to generate that first with a 3rd party app or one of the other examples floating around.

The sample project in question: https://github.com/fracai/tesla-history

Thanks @fracai ! Nice work. Yes, I think we could use that to port the data into InfluxDB. What method did you use to get the "refresh token"? If we extend your project to drop that data into the Dashboard DB, we could include instructions on how to make it work. Ideally, I want to find an automated way to get the token so I can include that in the pyPowerwall library.

I used "Auth for Tesla", an iOS app. There are a few apps listed on this TeslaMate FAQ. I think an automated method is ideal, but I've gotten the impression that manually retrieving the refresh token is the best method, as Tesla frequently requires entering a captcha in order to login. Fortunately, I think the refresh token is valid for a very long time and, unless you need to fill in data gaps, it shouldn't be necessary to request historical data. It might not be ideal, but I think it'd be acceptable to use a manual process to fill in old data. Then again, if the refresh token is stored, it'd be possible to make a daily request of historical data to regularly fill in any missing gaps.

There's also this python script for retrieving the token, though it does require manual intervention.

Any more work being done for historical data? It'd be really cool to pull in all the old data.

I agree. It's not an easy fix, unfortunately. I'm doing more investigation and open to any help. ;)

What I am most interested in is my NEM year data (which started a month ago). So even if I could just import the basic data (to grid, from grid, home usage) as a single point in history, it would be useful to fill in the gap from when I started to use powerwall-dashboard.
I have some good news regarding this! 😀 Thanks to @jasonacox for pointing out the TeslaPy module and also @fracai for the sample project using that. I started testing the module and have worked out how to import historical data via the Tesla Owner API (cloud) directly into InfluxDB. 🎉 Having >1yr of historical data that I would certainly love to see in my dashboard was a good motivator. 😊

I'm still working on the program but hopefully will be ready to release in the next week or few. Full time day job doesn't help 😞 - also I've had to brush up on my python knowledge, as these days I'm primarily coding in a language probably most people have never heard of!

So, I have developed this as a command line tool that accepts a start & end datetime range to retrieve history from the Tesla Owner API and import into InfluxDB. @jasonacox I'm thinking submitting a PR to add it to the tools directory seems appropriate? I'm using the staged authorization process of TeslaPy as it seemed convenient and easy in my opinion. The login to your Tesla account is only required once, as after login a token is cached/saved and it appears to never expire (similar to the Tesla app, which stays logged in).

Obviously not all data that the Powerwall-Dashboard logs is available for import. But the main data shown in the "Energy Usage" and "Power Meters" panels is. I think I can get the "Monthly Analysis" data working too (still testing). This is stored in the kwh retention policy, but it is populated from a CQ based on the data in autogen, so I just need to run a query to refresh it after importing history data. Also, I've been able to extrapolate the grid status from the backup event history, which is nice. 😃

In case there are problems with an import, obviously backups are highly recommended. But the program will also tag all the data points with a "source=cloud" tag in InfluxDB.
This makes it easy to reverse what was imported, with a query filtering on that tag.

The history data available from the Tesla cloud is in 5 minute intervals only (15 min for charge percentage), but it seems good enough. Here are some example comparisons. Below is showing data logged from Powerwall-Dashboard as normal (i.e. every 5 seconds or so): Compared with below where data was imported from the Tesla cloud - the graph and values (e.g. Power Meters) are pretty close: Example showing imported history data with a backup event - the grid status value is extrapolated from backup event history:

If you have a data gap / missing data, your graph might look like below: This can be fixed now by importing the history from Tesla cloud for the specific time period:

So it seems to be working well as far as I can tell, and useful for importing old data from before you installed Powerwall-Dashboard, as well as if your system went down for some reason! While I'm still working on this, if anyone has any feature requests, questions, tips etc. please send them my way. 😊

(side note: I'm already thinking about fetching historical weather data as well now, which is available from the OpenWeather One Call API 3.0 "timemachine" API call 😉)
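For reference, since every imported point carries the `source=cloud` tag, a removal query could be assembled along these lines. This is a hypothetical sketch only - the measurement name `http` here is an assumption for illustration, not necessarily the Dashboard's actual schema:

```python
def build_remove_query(measurement, start_utc, end_utc):
    """Build an InfluxQL DELETE statement targeting only the points
    that were imported from the Tesla cloud (tagged source=cloud).

    InfluxQL DELETE supports filtering by tag value and by time,
    which is what makes the source=cloud tag useful for rollback.
    """
    return (
        f"DELETE FROM {measurement} "
        f"WHERE source='cloud' "
        f"AND time >= '{start_utc}' AND time <= '{end_utc}'"
    )

# Example (measurement and time range are illustrative):
q = build_remove_query("http", "2022-10-01T00:00:00Z", "2022-10-05T23:59:59Z")
print(q)
```

Running such a statement against the database (e.g. via `influx` CLI or a client library) would remove only the cloud-imported points, leaving locally logged data untouched.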
Hi @mcbirse Great work here!!! Please feel free to submit a PR in the /tools directory. I'm sure others would love this too. |
I'm looking forward to testing this. I have several small data dropouts as well as several months of data before I got the dashboard setup. |
Same here. Take your time to get it where you're happy, but I'm really looking forward to getting a look as well. |
Update finally - I have been working on the script to import historical data from Tesla cloud over the past few weeks, and believe it is now good to go! 🤞 I will submit a PR to add it to the /tools directory shortly. Hoping for some feedback from the community on this, as so far all testing has been with my system only. Please see the below instructions for how to use the script (these will also be added to the tools README).

The tool can be used to import historical data from Tesla cloud, for instance from before you started using Powerwall-Dashboard. It can also be used to fill in missing data (data gaps) for periods where Powerwall-Dashboard stopped working (e.g. system down, or lost communication with the Local Gateway). By default, the script will search InfluxDB for data gaps and fill gaps only, so you do not need to identify time periods where you have missing data - this will be done automatically.
To use the script:
### Setup and logging in to Tesla account

On first use, it is recommended to run the script with the `--login` option:

```bash
# Login to Tesla account
python3 tesla-history.py --login
```

It will run in an interactive mode. The example below shows the config creation. In most cases, the default values will be correct; generally, only your Tesla account e-mail address is required.

After the config is saved, you will be prompted to login to your Tesla account. This is done by opening the displayed URL in your browser and then logging in. After you have logged in successfully, the browser will show a 'Page Not Found' webpage. Copy the URL of this page and paste it at the prompt. Once logged in successfully, you will be shown details of the energy site(s) associated with your account:
Once these steps are completed, you should not have to login again.

### Basic script usage and examples

To import history data from Tesla cloud for a given start/end period, use the `--start` and `--end` options:

```bash
# Get history data for start/end period
python3 tesla-history.py --start "2022-10-01 00:00:00" --end "2022-10-05 23:59:59"
```

The above example would retrieve history data for the first 5 full days of October. You can run in test mode first, which will not import any data, by using the `--test` option:

```bash
# Run in test mode with --test (will not import data)
python3 tesla-history.py --start "2022-10-01 00:00:00" --end "2022-10-05 23:59:59" --test
```

Example output:
If backup events are identified, this will be shown in the output, and the imported grid status data will include the outages:
Since grid status logging was only added to Powerwall-Dashboard in June, you can use the tool to import grid status history from before this time, without affecting your existing power usage data. The search for missing power usage / grid status data is done independently, so power usage history retrieval will be skipped and the missing grid status data will still be retrieved from Tesla cloud and imported:
Some convenience date options are also available (e.g. these could be used via cron):

```bash
# Convenience date options (both options can be used in a single command if desired)
python3 tesla-history.py --today
python3 tesla-history.py --yesterday
```

Finally, if something goes wrong, the imported data can be removed from InfluxDB with the `--remove` option:

```bash
# Remove imported data
python3 tesla-history.py --start "YYYY-MM-DD hh:mm:ss" --end "YYYY-MM-DD hh:mm:ss" --remove
```

For more usage options, run without arguments or with the `--help` option:

```bash
# Show usage options
python3 tesla-history.py --help
```
### Advanced option notes
### InfluxDB notes and Analysis Data

I am recording some technical notes below regarding InfluxDB and the Analysis data updates for anyone interested - although mainly because I'm likely to forget why I have done something a certain way in the program if I need to look at this again in the future. 😊

Importing data:
For updating analysis data of InfluxDB (kwh, daily, and monthly retention policies):
Some final notes for anyone using this tool to potentially import a lot of historical data: if you have a lot of historical data you would like to import, it is probably advisable to split this up into batches of no more than a few months at a time. During my testing, when retrieving the daily history data, I found the Tesla cloud site may stop responding if you overload it with too many requests in short periods (this is noted in the TeslaPy module documentation). However, this only occurred when trying to request data as quickly as possible. As such, a default 1 second delay between requests has been added to the script (this can be changed via a command line option). Please let us know if you find any issues or have questions.
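The delay-between-requests behaviour can be pictured with a minimal sketch. This is an illustration only - the real script's request function and option names may differ:

```python
import time

def fetch_all(days, fetch, delay=1.0):
    """Fetch history for each day, pausing between requests so the
    Tesla cloud endpoint is not overloaded with rapid-fire calls."""
    results = []
    for i, day in enumerate(days):
        if i > 0:
            time.sleep(delay)  # default 1 second pause between requests
        results.append(fetch(day))
    return results

# Example with a stand-in fetch function (no real API calls made):
data = fetch_all(["2022-10-01", "2022-10-02"], lambda d: {"day": d}, delay=0.0)
print(data)
```

In the real tool the `fetch` callable would be the TeslaPy history request; the point is simply that each iteration waits before issuing the next request.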
This looks fantastic. I haven't had a chance to test it yet, but I want to soon. I love that you have it designed to search for and fill gaps. That's great. |
AMAZING!!! I imported over six months of data and filled a ton of small data gaps. No issues so far. THANKS!!! |
Cool! I can't wait to try it out. I assume your work is based on the most current release schema, so I should upgrade first? |
@mikkyo9 - You will need to be running at least version 2.3.0 I believe, as that was the release when the CQ for grid status was added (released July). I'd still recommend upgrading to the latest version if you can though, as there have been many fixes.
@jgleigh - Awesome to hear, thanks for the feedback! Yes, you may see a number of small data gaps identified even if you think your system had been logging non-stop without issues. This is more likely with the grid status data, as the gap threshold is 1 minute for that, as opposed to 5 minutes for power usage. Any time you do an upgrade, for instance, you probably lose a few minutes of data, but generally not more than 5 mins in my experience. I considered accepting longer gaps for grid data - but looking at my own backup history, I've had a number of events that range from a few seconds to a few minutes... so this way these will still be filled in if the system did not log data during that time for whatever reason.
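The gap-threshold idea described above can be sketched roughly like this (a simplified illustration, not the script's actual code - the real tool queries InfluxDB for its timestamps):

```python
from datetime import datetime, timedelta

def find_gaps(timestamps, threshold):
    """Return (start, end) pairs where consecutive samples are further
    apart than the allowed threshold (e.g. 1 min for grid status,
    5 min for power usage)."""
    gaps = []
    for prev, cur in zip(timestamps, timestamps[1:]):
        if cur - prev > threshold:
            gaps.append((prev, cur))
    return gaps

ts = [
    datetime(2022, 10, 1, 0, 0),
    datetime(2022, 10, 1, 0, 1),
    datetime(2022, 10, 1, 0, 10),  # 9-minute jump: a gap by either threshold
]
print(find_gaps(ts, timedelta(minutes=5)))
```

Each returned pair is a candidate period to backfill from the Tesla cloud; a tighter threshold (like the 1 minute used for grid status) simply flags shorter outages as well.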
Great job on this! |
Works like a charm. Thanks for putting this together @mcbirse

Worked quite well for me also, thanks for all your efforts!

This worked like a charm. It's amazing what a community can do. I have all the data from system install in August 2021. Thanks @mcbirse and @jasonacox
GREAT writeup! Thanks @clukas1967 !
I wonder if we should update the README.md instructions for Dashboard installation on Synology systems (here). Would you want to submit a PR for that so you can get credit? One change:
That will break the upgrade.sh process. Instead, you should be able to edit compose.env to do the same thing:
Thanks!! 🙏 |
Thanks @jasonacox. I should have also acknowledged you in my original post, as we wouldn't have any of this without you!

I agree it makes sense to make the readme more explicit for PW Dashboard install. But I don't deserve any credit, since I merely restated the procedure in the link I cited in more compact form. Regarding editing compose.env vs. powerwall.yml, I would defer to your expertise here.

I do think it makes sense to have a Synology section in a readme specifically for the Tesla History utility. You could just take the last CLI block from my post and append the admonition to follow it up with the original procedure offered by @mcbirse on 10/13/2022. Probably that whole lengthy post should be the readme.

In a few days I will be starting a couple of new threads in here, as I'm about to expand both my battery storage and my PV generation on the roof and have questions/suggestions about how that will work.
Thanks @clukas1967 - great to hear the script has been so useful! The next iteration will support solar-only sites (as per the version under tools/solar-only). This is already working now for those solar-only users, and at some point I will merge all these changes into the primary script so we are not maintaining multiple versions... (still some QoL updates to do for solar-only users, such as a daemon/background mode). Excellent info for the Synology NAS users out there, of which there appear to be many! Personally, I'm running Powerwall-Dashboard in an Alpine Linux VM on a QNAP NAS (since I had that set up for other purposes already). The versatility of Powerwall-Dashboard and the number of platforms it can be run on is fantastic.

You could - but I find it useful to have the script on hand so it can be easily run at any time. I have frequently used it myself for instances where my host server was down (e.g. due to OS updates, NAS firmware updates, etc.), so you can then fill in missing data from the Tesla cloud at least, whenever you need to.
THANKS! Just had the same issue and this worked. |
Not sure if it's due to a change on Tesla's end, but I haven't been able to get past the Tesla Account setup steps in the instructions. After copying the URL, logging into the Tesla Account and getting "Page Not Found", the URL that I get results in the following error: Login failure - (mismatching_state) CSRF Warning! State not equal in request and response. Is there another way to add the authentication token to the script?
Hi @wiltocking - A few things:
Well, new day, new results - cleared cookies again and this time the URL worked! Downloading Tesla cloud history now. Thank you :)
Awesome! Thanks @wiltocking ! |
I'm not able to run the script - getting the error
I should also mention that my dashboard is up and running |
Hi @kylebrowning - I recommend following these steps: https://github.com/jasonacox/Powerwall-Dashboard/tree/main/tools/tesla-history#setup-and-logging-in-to-tesla-account |
I thought I'd try this; I'm away from home at the mo so doing it remotely, and I get this error when I try to login. I have deleted the config file and started from the beginning and get the same error: ERROR: Failed to retrieve PRODUCT_LIST - LoginRequired('(login_required) Login required')
Interesting. Did you get any error from Tesla when you tried the login, or just the expected 404 page URL? |
Hi All, First, huge thanks to @jasonacox for the awesome dashboard, @mcbirse for the super useful history tool and @clukas1967 for the life saving tips on getting everything running on a Synology NAS!

Now the issue: I am in Sydney/Australia, my local system timezone is Sydney/Australia, as is my InfluxDB and Site timezone, but for some reason that I cannot work out, when I run tesla-history, datetime values are output as Sydney/Australia time values, but with UTC offset (ie. +00:00) instead of Sydney/Australia offset (+11:00 or +10:00, depending on daylight savings). As a consequence, the calls to get_calendar_history_data only return 10 or 11 hours of data and I end up with 13 or 14 hour gaps every day.

I have worked around the issue with a filthy hack (I changed day.replace(tzinfo=sitetz).isoformat() to f"{day.isoformat()}+11:00", noting the only gaps I had were during daylight savings), but I would like to work out what the issue is and fix it so that I can use the tool without the hack. Any ideas?
Hi @porlcass - I take it you are running the tesla-history script on a Synology NAS? There seems to be some issue with the python dateutil module obtaining the timezone correctly on Synology NAS systems. Refer issue #455 and the last post here. I'm not sure of the exact cause, but it's possible the timezone files on Synology NAS are messed up in some way or incompatible with python dateutil.

If you are on Synology (or even another system), it would be great if you could run some of the python tests mentioned to see where the issue is occurring. A subset of those, like below, is probably sufficient in fact. Run python and paste all of the below commands in the python shell, and post the results.

```python
from dateutil.parser import isoparse
from dateutil import tz

start = isoparse("2023-06-26 00:00:00")
start
print(start)
type(start)
type(start.utcoffset())

influxtz = tz.gettz("Australia/Sydney")
influxtz
print(influxtz)
type(influxtz)

start = start.replace(tzinfo=influxtz)
start
print(start)
type(start)
type(start.utcoffset())
```

It would be interesting to see what is returned for the final `type(start.utcoffset())` command.
The above (from a Synology NAS) is showing the timestamp is output with a +00:00 offset (UTC), which is incorrect based on the timezone specified. It should show +10:00 for Australia/Sydney for the above example. Is your system showing a similar issue? Can any other Synology NAS users shed some light on this issue? Unfortunately, I do not have a Synology NAS to test with myself.

The script is currently using the python dateutil module; however, from python 3.9 the built-in zoneinfo module is also available. If the above failed, it would be interesting to see what result the following gives in that case.

```python
from dateutil.parser import isoparse
from zoneinfo import ZoneInfo

start = isoparse("2023-06-26 00:00:00")
start
print(start)

influxtz = ZoneInfo("Australia/Sydney")
influxtz
print(influxtz)
type(influxtz)

start = start.replace(tzinfo=influxtz)
start
print(start)
```

When originally writing the script, I tested the zoneinfo module vs. dateutil and found issues with zoneinfo, so I did not use it.
Hi @mcbirse, yes, I am running the script on a Synology NAS. The results of the first script are as follows:

The second script was a bit more fiddly. The version of Python 3 currently installed in the Synology DSM is 3.8.15, so ZoneInfo is not available. As noted above by @clukas1967, you can install Python 3.9, but it does not include PIP, and installing it is... beyond my abilities, so I could not install python-dateutil in Python 3. I ended up installing backports.zoneinfo and updated the second line in your second script to `from backports.zoneinfo import ZoneInfo`. The results of the modified second script are as follows:

So ZoneInfo looks like a winner for Synology NAS users. I will modify my version of tesla-history.py to use ZoneInfo, but it will be referencing backports.zoneinfo, so it will have to be a special 'Synology NAS users' version of the script.
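For what it's worth, a version-agnostic import could let a single script cover both cases, avoiding a separate 'Synology' version. A sketch only - this assumes the script just needs a tzinfo object, and that `backports.zoneinfo` has been pip-installed on python < 3.9:

```python
from datetime import datetime

try:
    from zoneinfo import ZoneInfo            # built in on python 3.9+
except ImportError:
    from backports.zoneinfo import ZoneInfo  # pip install backports.zoneinfo

# Same check as the diagnostic above: a winter date in Sydney
# should carry a +10:00 (AEST) offset, not +00:00.
influxtz = ZoneInfo("Australia/Sydney")
start = datetime(2023, 6, 26).replace(tzinfo=influxtz)
print(start.isoformat())
```

With this fallback in place, the rest of the code can use `ZoneInfo(...)` uniformly regardless of the python version.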
I got nothing.. got no option to log in. |
@simonrb2000 - Just to make sure we aren't missing anything, did you follow the login instructions here? https://github.com/jasonacox/Powerwall-Dashboard/tree/main/tools/tesla-history#setup-and-logging-in-to-tesla-account |
Great tool! How did I not know about this? I've been using Powerwall Dashboard since January but I've had the system much longer, so getting the old data in the dash is amazing. Thanks! Just one question: Is there any way to fetch historical SOC for each Powerwall? One of mine is reading much lower than it should (but not warranty-claim low yet - 10.4kWh) and some history on it would be great. |
Hey @ceeeekay - Glad you found the tool and find it useful! It's a godsend to be honest and was well worth the time spent developing and testing, and I still use it frequently myself... typically due to outages when you need to retrieve some missing data. Unfortunately, as far as I'm aware there isn't any way to retrieve what you are after, by which I'm guessing you mean the Powerwall Capacity (full pack energy) of your PWs.
100%! ❤️ Same here. Thanks @mcbirse 🙏 |
@mcbirse - thanks. I sort of assumed it wouldn't be available if it wasn't already included, but you never know for sure. @jasonacox - I wonder if it's worth having this automatically run with a ~1 day history after an upgrade or restart? I've always been aware I'm creating gaps whenever I upgrade the dash, or patch & reboot the host, and didn't know I could recover the data this easily. Also I feel like this tool should be mentioned on the main README.md. I only found it because I was digging around in the install directory. For new users it would be nice to start out with a bunch of history already in the DB.
++1 on this suggestion.
Been running PWD now for quite some time, and there are multiple days per month (let's say 3 days on average, which is 10% of a month, so significant) where there is some hiccup between PWD and the Tesla GW.
In the last week alone this happened twice. The first “outage” was ~5 hrs and the second was ~4.5 hrs. See red circles in plot below.
I’ve seen gaps of over 12 hrs, and in one case 2 months ago my Tesla GW lost its mind completely and the network connection had to be reprovisioned. I don’t check PWD every day, so I didn’t catch it for over a week. I can’t be the only one having these issues.
As previously discussed on this thread, my PWD instance runs on a Synology DS1614+ with 10Gbps fiber connection to my core switch and backup power for both. So I can be 100% confident that the Synology box is not the problem. Ergo….
A feature that would “close” such gaps automatically would be truly welcome and provide nice peace of mind, especially since the code for historical grabbing already exists and is tested. The basic problem is that the current architecture of PWD is essentially stateless. When it queries, it logs whatever it gets for that instant in time. If it gets nothing, then nothing happens and there’s no trigger for a future query.
The way I would craft the RFE would be like this:
* PWD keeps running tally of last successful GW query w/valid returned data. (A simple DB query should be sufficient for this, or it could be a separate running register.)
* For each future GW query loop:
  * Pull current data query. If no valid returned data, exit (same behavior as now).
  * If valid data received, check the delta-T between now and last valid data.
  * If delta-T > 1 polling interval, then enter recovery subroutine to query for all intermediate data now that connectivity has been restored.
  * Subroutine sequentially moves through all query timepoints with null data in DB and attempts to requery. If valid data received, overwrite null data in DB.
  * I’m assuming there are empty rows in the DB for the missing intervals. If not, then I guess the question is: can one retroactively insert data into a TSDB?
* If one wished to bound this activity, the “lookback” could be limited to some hard limit vs. just the previous good data timestamp.
  * In this case I’d recommend 12 or 24 hours as the lookback limit.
  * Not sure why you’d “cap” the lookback period if you have a known last good data timestamp and the code has the ability to query arbitrarily far back in the past.
* Obvious exception conditions to manage:
  * PWD 1st time ever startup
    * After first ever coldstart of PWD there will obviously be no valid previous history timestamp.
    * So if this timestamp is blank, obviously there would be no lookback.
  * PWD warmstart after server outage (e.g. power failure, server down for code upgrade or other maintenance, or Docker failure)
    * Have to decide how to handle this. If the “last good timestamp” suggested above is stored in the DB, then this can be checked on a warmstart and treated like any other temporary GW outage scenario.
    * I had this exact scenario a few months ago when I reorganized the disks on my Synology and had to move the whole Docker instance with PWD to another drive; it took several days to figure out how to do it successfully, and PWD was administratively down during this period.
  * PWD warmstart from backup
    * I am now doing standalone backups of my PWD + Docker folder trees after discovering that Synology Hyperbackup does not get them.
    * For other Synology users - if you are “checking” the “Docker” app in the backup menu, all it is getting is the app, **NOT** the data subdirectories.
    * Never had to do this yet, but let’s say I had a crash and had to restore a previous PWD backup.
    * I think as long as Docker, PWD, etc. were all properly reinstalled in the right places, then PWD would just see this as a warmstart as above. But I flag the use case in case the internal architecture somehow would treat it differently.
This approach minimizes the brain surgery and keeps the overall stateless model while imposing a minor and simple compare with ability to conditionally execute a finite number of additional stateless queries under a single exception condition.
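As a rough illustration of the proposed logic, the loop could look something like this. Purely a sketch - none of these names exist in PWD, and the real polling and backfill would go through Telegraf/InfluxDB and the Tesla cloud tool respectively:

```python
from datetime import datetime, timedelta

POLL_INTERVAL = timedelta(seconds=5)  # PWD's normal polling cadence (illustrative)

def poll_cycle(now, last_good, query_live, query_history):
    """One polling pass: log live data, and if a gap opened since the
    last good sample, trigger a backfill of the missing interval."""
    data = query_live()
    if data is None:
        return last_good  # no data: same stateless behavior as now
    if last_good is not None and now - last_good > POLL_INTERVAL:
        query_history(last_good, now)  # recovery: fill the gap that just ended
    return now  # new "last successful query" timestamp

# Simulated run: one successful poll, a 10-minute outage, then recovery.
filled = []
t0 = datetime(2024, 9, 16, 8, 0, 0)
last = poll_cycle(t0, None, lambda: {"ok": True}, lambda s, e: filled.append((s, e)))
last = poll_cycle(t0 + timedelta(minutes=10), last,
                  lambda: {"ok": True},
                  lambda s, e: filled.append((s, e)))
print(filled)  # the 10-minute gap queued for backfill
```

The first-ever-startup exception falls out naturally here: with no prior timestamp (`last_good is None`), no lookback is attempted.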
Regarding the issue with long term GW unreachability for unknown reasons, I would also add a second RFE to automatically send an administrator email alert every 12 or 24 hrs that no data has been received.
…-cl
I like the idea of having the history import at setup time (and update during upgrades). It would require that the user signs in to the Tesla cloud, which would create another step during setup. It probably makes sense to make it an option. TODO
Hi, thank you for this. I have got the dashboard up and running and would like to import my historical data. However, how do I install the required python modules? When I run `pip install python-dateutil teslapy influxdb` I get the error "error: externally-managed-environment". I have, on another linux VM, run https://github.com/netzero-labs/tesla-solar-download
The "externally-managed-environment" error typically occurs on newer Linux distributions (per PEP 668), where the system Python is marked as externally managed and pip refuses to install packages into it directly. To resolve this, create and activate a virtual environment before running the pip install command: `python3 -m venv <env_name>`, then `source <env_name>/bin/activate` (on Linux/Mac) or `<env_name>\Scripts\activate` (on Windows), and run pip inside it. If you're using a Python environment manager like conda, you can activate an environment with `conda activate <env_name>` instead.
It seems the DB is populated from the time of install?
Any way to pull in all the history?