Merge commit 'f2be7dffef15986d08d068f0e469da66a612c26f' of https://gi…
KaiHuaDou committed Mar 1, 2024
2 parents 9fa09e8 + f2be7df commit 625803f
Showing 87 changed files with 2,120 additions and 598 deletions.
2 changes: 1 addition & 1 deletion .gitattributes
@@ -1,7 +1,7 @@
# Tell git what files are txt
*.py text diff=python
*.pyj text diff=python
-*.recipe text diff=python
+*.recipe text diff=python linguist-language=python
*.coffee text
*.js text
*.pot text
44 changes: 43 additions & 1 deletion Changelog.txt
@@ -23,6 +23,48 @@
# - title by author
# }}}

{{{ 7.6.0 2024-03-01

:: new features

- [major 1979022] E-book viewer: Allow saving current settings in "profiles" that can be quickly and easily swapped between

To create a profile or switch to a previously saved profile, access "Profiles" from the viewer controls or press the `Alt+P` shortcut.

- [2053144] Edit book: Add a shortcut `Ctrl+M` to merge selected files

- Get books: Add support for Amazon Mexico

- A new toolbar button to show all available actions in sub menus. It can be added via `Preferences->Toolbars & menus`

- Edit book: Allow selecting multiple books to edit at once, opening all selected books in separate editor instances

:: bug fixes

- [2054617] Cover grid: Fix dragging the mouse while holding Shift to extend the selection not working well

- [2054934] E-book viewer: Fix doing a multi-page selection sometimes causing the start of the selection to move backwards

- Edit book: Live CSS: Fix regression causing incorrect colors in calibre 7

- Windows: Fix a regression in calibre 7 that caused images in long text columns to not be displayed in the tooltip

- Fix disabled items in menus having blurry text

- Content server: Fix a regression in the previous release that caused an error when doing a search/sort on some browsers

:: improved recipes
- New Yorker
- Moneycontrol
- Swarajya Mag
- nautil.us
- Pro Physik

:: new recipes
- The Week UK by unkn0wn
- Andhrajyothy by unkn0wn
}}}

{{{ 7.5.1 2024-02-09

:: new features
@@ -52,7 +94,7 @@

- Fix a regression in 7.2 that caused the popup used for editing fields in the book list to be mis-positioned on very wide monitors

-- Version 7.5.1 fixes a bug in 7.5.0 where calibre would not start up using dark colors when the system was in dark mode on some windows installs
+- [2052766] Version 7.5.1 fixes a bug in 7.5.0 where calibre would not start up using dark colors when the system was in dark mode on some windows installs and another bug that could cause errors when using cover grid mode with covers stored in CMYK colorspace

:: improved recipes
- El Diplo
4 changes: 2 additions & 2 deletions manual/template_lang.rst
@@ -415,7 +415,7 @@ Examples:

* ``program: field('series') == 'foo'`` returns ``'1'`` if the book's series is 'foo', otherwise ``''``.
* ``program: 'f.o' in field('series')`` returns ``'1'`` if the book's series matches the regular expression ``f.o`` (e.g., `foo`, `Off Onyx`, etc.), otherwise ``''``.
-* ``program: 'science' inlist field('#genre')`` returns ``'1'`` if any of the book's genres match the regular expression ``science``, e.g., `Science`, `History of Science`, `Science Fiction` etc.), otherwise ``''``.
+* ``program: 'science' inlist field('#genre')`` returns ``'1'`` if any of the book's genres match the regular expression ``science``, e.g., `Science`, `History of Science`, `Science Fiction` etc., otherwise ``''``.
* ``program: '^science$' inlist field('#genre')`` returns ``'1'`` if any of the book's genres exactly match the regular expression ``^science$``, e.g., `Science`. The genres `History of Science` and `Science Fiction` don't match. If there isn't a match then returns ``''``.
* ``program: if field('series') != 'foo' then 'bar' else 'mumble' fi`` returns ``'bar'`` if the book's series is not ``foo``. Otherwise it returns ``'mumble'``.
* ``program: if field('series') == 'foo' || field('series') == '1632' then 'yes' else 'no' fi`` returns ``'yes'`` if series is either ``'foo'`` or ``'1632'``, otherwise ``'no'``.
@@ -875,7 +875,7 @@ To accomplish this, we:

1. Create a composite field (give it lookup name #aa) containing ``{series}/{series_index} - {title}``. If the series is not empty, then this template will produce `series/series_index - title`.
2. Create a composite field (give it lookup name #bb) containing ``{#genre:ifempty(Unknown)}/{author_sort}/{title}``. This template produces `genre/author_sort/title`, where an empty genre is replaced with `Unknown`.
-3. Set the save template to ``{series:lookup(.,#aa,#bb}``. This template chooses composite field ``#aa`` if series is not empty and composite field ``#bb`` if series is empty. We therefore have two completely different save paths, depending on whether or not `series` is empty.
+3. Set the save template to ``{series:lookup(.,#aa,#bb)}``. This template chooses composite field ``#aa`` if series is not empty and composite field ``#bb`` if series is empty. We therefore have two completely different save paths, depending on whether or not `series` is empty.
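
For example (hypothetical metadata, just to illustrate the templates above): a book with an empty series, ``#genre`` set to ``Science Fiction``, ``author_sort`` of ``Asimov, Isaac`` and title ``Foundation`` does not match the ``.`` pattern in ``lookup()``, so ``#bb`` is chosen and the book is saved under::

    Science Fiction/Asimov, Isaac/Foundation

A book that does belong to a series goes through ``#aa`` instead and is saved under `series/series_index - title`.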

Tips
-----
2 changes: 2 additions & 0 deletions manual/viewer.rst
@@ -301,6 +301,8 @@ be customised in the viewer :guilabel:`Preferences`. The default shortcuts are l
- Toggle Table of Contents
* - :kbd:`Ctrl+S`
- Read aloud
* - :kbd:`Alt+P`
- Change settings quickly by creating and switching to :guilabel:`profiles`
* - :kbd:`Alt+f`
- Follow links with the keyboard
* - :kbd:`Ctrl+C`
122 changes: 122 additions & 0 deletions recipes/andhrajyothy_ap.recipe
@@ -0,0 +1,122 @@
from calibre.web.feeds.news import BasicNewsRecipe
import json
from datetime import date
from collections import defaultdict

# figure out your local edition id from the log of this recipe
edi_id = 182 # NTR VIJAYAWADA - 182
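# One way to find your id: run this recipe once in test mode, for example
#   ebook-convert andhrajyothy_ap.recipe .epub --test -vv
# and check the log, where parse_index() prints each edition as "Location - Id"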

today = date.today().strftime('%d/%m/%Y')

# for older edition
# today = '15/01/2024'

day, month, year = (int(x) for x in today.split('/'))
dt = date(year, month, day)
today = today.replace('/', '%2F')

index = 'https://epaper.andhrajyothy.com'

class andhra(BasicNewsRecipe):
title = 'ఆంధ్రజ్యోతి - ఆంధ్రప్రదేశ్'
language = 'te'
__author__ = 'unkn0wn'
masthead_url = 'https://upload.wikimedia.org/wikipedia/commons/0/01/Andhra_Jyothi_newspaper_logo.png'
timefmt = ' [' + dt.strftime('%b %d, %Y') + ']'
description = 'Articles from the ABN Andhra Jyothy epaper, digital edition'
encoding = 'utf-8'
remove_empty_feeds = True

def __init__(self, *args, **kwargs):
BasicNewsRecipe.__init__(self, *args, **kwargs)
if self.output_profile.short_name.startswith('kindle'):
self.title = 'ఆంధ్రజ్యోతి ' + dt.strftime('%b %d, %Y')

extra_css = '''
.cap { text-align:center; font-size:small; }
img { display:block; margin:0 auto; }
'''

def parse_index(self):

self.log(
'\n***\nif this recipe fails, report it on: '
'https://www.mobileread.com/forums/forumdisplay.php?f=228\n***\n'
)

get_edition = index + '/Home/GetEditionsHierarchy'
edi_data = json.loads(self.index_to_soup(get_edition, raw=True))
self.log('## For your local edition id, modify this recipe to match your edi_id from the cities below\n')
for edi in edi_data:
if edi['org_location'] in {'Magazines', 'Navya Daily'}:
continue
self.log(edi['org_location'])
cities = []
for edi_loc in edi['editionlocation']:
cities.append(edi_loc['Editionlocation'] + ' - ' + edi_loc['EditionId'])
self.log('\t', ',\n\t'.join(cities))

self.log('\nDownloading: Edition ID - ', edi_id)
url = index + '/Home/GetAllpages?editionid=' + str(edi_id) + '&editiondate=' + today
main_data = json.loads(self.index_to_soup(url, raw=True))

feeds_dict = defaultdict(list)
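# map of section name -> list of article dicts; parse_index() must return a
# list of (section, articles) pairs, and each article dict supplies 'title'
# and 'url' (the title is filled in later by populate_article_metadata)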

for page in main_data:
sec_name = page['PageNo'] + 'వ పేజీ'
if page['PageNumber'] == 'Page 1':
self.cover_url = page['HighResolution']
art = index + '/Home/getingRectangleObject?pageid=' + str(page['PageId'])
raw2 = self.index_to_soup(art, raw=True)
art_data = json.loads(raw2)
for snaps in art_data:
section = sec_name
url = str(snaps['OrgId'])
if snaps['ObjectType'] == 4:
continue
feeds_dict[section].append({"title": '', "url": url})
return [(section, articles) for section, articles in feeds_dict.items()]

def preprocess_raw_html(self, raw, *a):
data = json.loads(raw)
body = ''
for x in data['StoryContent']:
if x['Headlines']:
if len(x['Headlines']) > 0:
body += '<h1>' + x['Headlines'][0].replace('\n', ' ') + '</h1>'
for y in x['Headlines'][1:]:
body += '<h4>' + y.replace('\n', ' ') + '</h4>'
if data['LinkPicture']:
for pics in data['LinkPicture']:
if pics['fullpathlinkpic']:
body += '<div><img src="{}"></div>'.format(pics['fullpathlinkpic'])
if pics['caption']:
body += '<div class="cap">' + pics['caption'] + '</div><p>'
for x in data['StoryContent']:
if x['Body'] and x['Body'] != '':
body += '<span class="body">' + x['Body'] + '</span>'
# if data['filepathstorypic']: # this gives you a snap image of the article from page
# body += '<div><img src="{}"></div>'.format(data['filepathstorypic'].replace('\\', '/'))
if body.strip() == '':
self.abort_article('no article')
return '<html><body><div>' + body + '</div></body></html>'

def populate_article_metadata(self, article, soup, first):
article.url = '***'
h1 = soup.find('h1')
h4 = soup.find('h4')
body = soup.find(attrs={'class':'body'})
if h4:
article.summary = self.tag_to_string(h4)
article.text_summary = article.summary
elif body:
article.summary = ' '.join(self.tag_to_string(body).split()[:15]) + '...'
article.text_summary = article.summary
article.title = 'ఆంధ్రజ్యోతి'
if h1:
article.title = self.tag_to_string(h1)
elif body:
article.title = ' '.join(self.tag_to_string(body).split()[:7]) + '...'

def print_version(self, url):
return index + '/User/ShowArticleView?OrgId=' + url
122 changes: 122 additions & 0 deletions recipes/andhrajyothy_tel.recipe
@@ -0,0 +1,122 @@
from calibre.web.feeds.news import BasicNewsRecipe
import json
from datetime import date
from collections import defaultdict

# figure out your local edition id from the log of this recipe
edi_id = 225 # TELANGANA MAIN II - 225
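# One way to find your id: run this recipe once in test mode, for example
#   ebook-convert andhrajyothy_tel.recipe .epub --test -vv
# and check the log, where parse_index() prints each edition as "Location - Id"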

today = date.today().strftime('%d/%m/%Y')

# for older edition
# today = '15/01/2024'

day, month, year = (int(x) for x in today.split('/'))
dt = date(year, month, day)
today = today.replace('/', '%2F')

index = 'https://epaper.andhrajyothy.com'

class andhra(BasicNewsRecipe):
title = 'ఆంధ్రజ్యోతి - తెలంగాణ'
language = 'te'
__author__ = 'unkn0wn'
masthead_url = 'https://upload.wikimedia.org/wikipedia/commons/0/01/Andhra_Jyothi_newspaper_logo.png'
timefmt = ' [' + dt.strftime('%b %d, %Y') + ']'
description = 'Articles from the ABN Andhra Jyothy epaper, digital edition'
encoding = 'utf-8'
remove_empty_feeds = True

def __init__(self, *args, **kwargs):
BasicNewsRecipe.__init__(self, *args, **kwargs)
if self.output_profile.short_name.startswith('kindle'):
self.title = 'ఆంధ్రజ్యోతి ' + dt.strftime('%b %d, %Y')

extra_css = '''
.cap { text-align:center; font-size:small; }
img { display:block; margin:0 auto; }
'''

def parse_index(self):

self.log(
'\n***\nif this recipe fails, report it on: '
'https://www.mobileread.com/forums/forumdisplay.php?f=228\n***\n'
)

get_edition = index + '/Home/GetEditionsHierarchy'
edi_data = json.loads(self.index_to_soup(get_edition, raw=True))
self.log('## For your local edition id, modify this recipe to match your edi_id from the cities below\n')
for edi in edi_data:
if edi['org_location'] in {'Magazines', 'Navya Daily'}:
continue
self.log(edi['org_location'])
cities = []
for edi_loc in edi['editionlocation']:
cities.append(edi_loc['Editionlocation'] + ' - ' + edi_loc['EditionId'])
self.log('\t', ',\n\t'.join(cities))

self.log('\nDownloading: Edition ID - ', edi_id)
url = index + '/Home/GetAllpages?editionid=' + str(edi_id) + '&editiondate=' + today
main_data = json.loads(self.index_to_soup(url, raw=True))

feeds_dict = defaultdict(list)
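# map of section name -> list of article dicts; parse_index() must return a
# list of (section, articles) pairs, and each article dict supplies 'title'
# and 'url' (the title is filled in later by populate_article_metadata)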

for page in main_data:
sec_name = page['PageNo'] + 'వ పేజీ'
if page['PageNumber'] == 'Page 1':
self.cover_url = page['HighResolution']
art = index + '/Home/getingRectangleObject?pageid=' + str(page['PageId'])
raw2 = self.index_to_soup(art, raw=True)
art_data = json.loads(raw2)
for snaps in art_data:
section = sec_name
url = str(snaps['OrgId'])
if snaps['ObjectType'] == 4:
continue
feeds_dict[section].append({"title": '', "url": url})
return [(section, articles) for section, articles in feeds_dict.items()]

def preprocess_raw_html(self, raw, *a):
data = json.loads(raw)
body = ''
for x in data['StoryContent']:
if x['Headlines']:
if len(x['Headlines']) > 0:
body += '<h1>' + x['Headlines'][0].replace('\n', ' ') + '</h1>'
for y in x['Headlines'][1:]:
body += '<h4>' + y.replace('\n', ' ') + '</h4>'
if data['LinkPicture']:
for pics in data['LinkPicture']:
if pics['fullpathlinkpic']:
body += '<div><img src="{}"></div>'.format(pics['fullpathlinkpic'])
if pics['caption']:
body += '<div class="cap">' + pics['caption'] + '</div><p>'
for x in data['StoryContent']:
if x['Body'] and x['Body'] != '':
body += '<span class="body">' + x['Body'] + '</span>'
# if data['filepathstorypic']: # this gives you a snap image of the article from page
# body += '<div><img src="{}"></div>'.format(data['filepathstorypic'].replace('\\', '/'))
if body.strip() == '':
self.abort_article('no article')
return '<html><body><div>' + body + '</div></body></html>'

def populate_article_metadata(self, article, soup, first):
article.url = '***'
h1 = soup.find('h1')
h4 = soup.find('h4')
body = soup.find(attrs={'class':'body'})
if h4:
article.summary = self.tag_to_string(h4)
article.text_summary = article.summary
elif body:
article.summary = ' '.join(self.tag_to_string(body).split()[:15]) + '...'
article.text_summary = article.summary
article.title = 'ఆంధ్రజ్యోతి'
if h1:
article.title = self.tag_to_string(h1)
elif body:
article.title = ' '.join(self.tag_to_string(body).split()[:7]) + '...'

def print_version(self, url):
return index + '/User/ShowArticleView?OrgId=' + url
Binary file added recipes/icons/andhrajyothy_ap.png
Binary file added recipes/icons/andhrajyothy_tel.png
Binary file added recipes/icons/nautilus.png
Binary file modified recipes/icons/the_week_magazine_free.png
Binary file added recipes/icons/the_week_uk.png