
Intermediate URL redirect #140

nmandic78 opened this issue Feb 16, 2024 · 7 comments

@nmandic78

It looks like Google now only provides an intermediate URL that redirects to the real news site URL:
'news.google.com/articles/CBMiU2h0dHBzOi8vd3d3LnRoZXZlcmdlLmNvbS8yMDI0LzIvMTQvMjQwNzI3OTIvYXBwbGUtdmlzaW9uLXByby1lYXJseS1hZG9wdGVycy1yZXR1cm5z0gEA?hl=en-US&gl=US&ceid=US%3Aen'

I tried to get the redirected URL with requests, but it seems Google uses JavaScript, so that won't do; I only get to the consent page. I don't know how to tackle this without Selenium or similar, and that is overhead I don't want for my project.
If someone has a solution or a pointer in the right direction, I will be grateful.
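
For reference, a minimal sketch of the plain-requests attempt described above, using the article URL from this issue; following HTTP redirects just lands on Google's consent page, because the final hop happens in JavaScript:

import requests

intermediate_url = (
    "https://news.google.com/articles/CBMiU2h0dHBzOi8vd3d3LnRoZXZlcmdlLmNvbS8yMDI0LzIvMTQvMjQwNzI3OTIvYXBwbGUtdmlzaW9uLXByby1lYXJseS1hZG9wdGVycy1yZXR1cm5z0gEA"
    "?hl=en-US&gl=US&ceid=US%3Aen"
)

# requests follows HTTP redirects but does not execute JavaScript, so
# resp.url ends up on a consent/interstitial page rather than the article.
resp = requests.get(intermediate_url, allow_redirects=True, timeout=10)
print(resp.url)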

@talhaanwarch

Try this:

urls = googlenews.get_links()

After getting the news URLs, you have to resolve them one by one. Here is an example:

from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from webdriver_manager.chrome import ChromeDriverManager
from selenium.webdriver.chrome.service import Service as ChromeService
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException

def get_final_url(initial_url):
    # Configure Chrome options for headless mode
    chrome_options = Options()
    chrome_options.add_argument("--headless")

    # Set up Chrome WebDriver
    driver = webdriver.Chrome(service=ChromeService(ChromeDriverManager().install()), options=chrome_options)

    try:
        # Open the initial URL
        driver.get(initial_url)

        # Wait for an <article> element, which signals the target page has loaded
        wait = WebDriverWait(driver, 10)
        wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR, "article")))
        # By now the JavaScript redirect has run, so current_url is the real article URL
        final_url = driver.current_url
        return final_url
    except TimeoutException:
        print("Timed out waiting for page to load")
        return None
    finally:
        # Close the WebDriver session
        driver.quit()

# Example usage (get_links() returns URLs without a scheme, so prepend https://):
initial_url = f"https://{urls[0]}"
final_url = get_final_url(initial_url)
if final_url:
    print("Final URL after content loaded:", final_url)

@nmandic78

@talhaanwarch, thank you. As I said, Selenium is overkill for my use case, so I dropped this library and solved what I needed with the Bing Search API. Anyway, thank you, and maybe somebody will find your snippet useful. Regards.

@deanm0000 commented Apr 3, 2024

This is much simpler than it seems; you don't even need BeautifulSoup.

Use this:

def get_link_url(txt):
    # The intermediate page contains fallback markup like 'Opening <a href="...">';
    # locate it and slice out the href value.
    i = txt.find("Opening")
    j = txt.find("a href=", i)
    k = txt.find('"', j + 8)  # j + 8 skips past 'a href="'
    return txt[j + 8 : k]

You still have to GET the intermediate URL, but if you do:

import requests

resp = requests.get(intermediate_url)
real_link = get_link_url(resp.text)

It relies on a bit of markup in the intermediate page that you're supposed to see if it doesn't redirect fast enough, which tells you it's "Opening" the article. You just use a normal Python find to look for that string, then find where the URL begins immediately after it, then find where the URL ends and extract it. Poof: no Selenium (or even bs4) required.

@HurinHu (Member) commented Apr 3, 2024

Be aware that sending too many requests to Google may get you 429 errors. Each link is sent to Google first, and only then do you get the actual link.
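
For what it's worth, a minimal retry sketch for that case; the retry count and delays are arbitrary assumptions, not anything this library prescribes:

import time
import requests

def get_with_backoff(url, retries=4, base_delay=2.0):
    # GET a URL, sleeping and retrying with exponential backoff on HTTP 429.
    for attempt in range(retries):
        resp = requests.get(url)
        if resp.status_code != 429:
            return resp
        # Honor Retry-After if Google sends it; otherwise back off exponentially.
        delay = float(resp.headers.get("Retry-After", base_delay * 2**attempt))
        time.sleep(delay)
    resp.raise_for_status()  # still 429 after all retries; raise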

@deanm0000 commented Apr 4, 2024

I ended up really wanting async support, so I wrote my own version which skips the intermediate URL altogether. In doing this DIY, I'm not actually sure where the intermediate URL comes from, as the real URL is right there in the search results. It's not very pretty or typed, so it's not ready to be its own repo, but if somebody wants to clean it up and incorporate it here or publish it elsewhere, then please do:

import httpx
from bs4 import BeautifulSoup
from headers import HEADERS


async def search_news(search_terms, date_range=None):
    # httpx URL-encodes query params itself, so pass them unquoted
    params = dict(q=search_terms, tbm="nws")
    if date_range is not None and isinstance(date_range, (list, tuple)):
        start_date = date_range[0].strftime("%m/%d/%Y")
        end_date = date_range[1].strftime("%m/%d/%Y")
        params["tbs"] = f"cdf:1,cd_min:{start_date},cd_max:{end_date}"
    # http2=True needs the httpx[http2] extra installed
    async with httpx.AsyncClient(http2=True, headers=HEADERS) as dlclient:
        resp = await dlclient.get(
            "https://www.google.com/search",
            params=params,
        )
    rbs = BeautifulSoup(resp.text, features="lxml")
    # Keep only external links; Google's own hrefs are navigation chrome
    links = [
        x
        for x in rbs.find_all("a")
        if "href" in x.attrs
        and "https" in x.attrs["href"]
        and "google" not in x.attrs["href"]
    ]

    pages = []
    for link in links:
        # hrefs look like /url?q=https://...&sa=...; slice out the real URL
        url = link.attrs["href"]
        url_begin = url.find("https")
        url = url[url_begin:].split("&")[0]
        # the link's text nodes carry the title plus source/date metadata
        misc = link.find_all(string=True)
        misc = [x for x in misc if x.parent.name == "div"]
        pages.append({"url": url, "title": misc[0], "misc": misc[1:]})
    return pages

It assumes you have a file headers.py with a dict of headers in a variable called HEADERS. Google doesn't actually seem to mind if you don't use browser headers, so it's probably superfluous.
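
For completeness, a plausible headers.py under that assumption; the exact values are illustrative, not required:

# headers.py -- illustrative browser-like headers (values are assumptions)
HEADERS = {
    "User-Agent": (
        "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
        "(KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36"
    ),
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
    "Accept-Language": "en-US,en;q=0.9",
}

And since search_news is a coroutine, you would run it with something like asyncio.run(search_news("apple vision pro")).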

@oldsiks commented Aug 28, 2024

Here is a demo I wrote by analyzing the URL redirection process:

import json
from urllib.parse import quote
from lxml import etree
import requests

def get_google_params(url):
    # The intermediate page carries a signature, timestamp, and article id as
    # data attributes on a <c-wiz><div>; extract them for the RPC call below.
    response = requests.get(url)
    tree = etree.HTML(response.text)
    sign = tree.xpath('//c-wiz/div/@data-n-a-sg')[0]
    ts = tree.xpath('//c-wiz/div/@data-n-a-ts')[0]
    source = tree.xpath('//c-wiz/div/@data-n-a-id')[0]
    return source, sign, ts

def get_origin_url(source, sign, ts):
    # Replay the "garturlreq" RPC that the page itself makes via batchexecute
    url = "https://news.google.com/_/DotsSplashUi/data/batchexecute"
    req_data = [[[
        "Fbv4je",
        f"[\"garturlreq\",[[\"zh-HK\",\"HK\",[\"FINANCE_TOP_INDICES\",\"WEB_TEST_1_0_0\"],null,null,1,1,\"HK:zh-Hant\",null,480,null,null,null,null,null,0,5],\"zh-HK\",\"HK\",1,[2,4,8],1,1,null,0,0,null,0],\"{source}\",{ts},\"{sign}\"]",
        None,
        "generic"
    ]]]
    payload = f"f.req={quote(json.dumps(req_data))}"
    headers = {
      'Host': 'news.google.com',
      # 'Content-Length': '712',
      'X-Same-Domain': '1',
      'Accept-Language': 'zh-CN',
      'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; rv:109.0) Gecko/20100101 Firefox/115.0',
      'Content-Type': 'application/x-www-form-urlencoded;charset=UTF-8',
      'Accept': '*/*',
      'Origin': 'https://news.google.com',
      # 'X-Client-Data': 'CPz5ygE=',
      'Sec-Fetch-Site': 'same-origin',
      'Sec-Fetch-Mode': 'cors',
      'Sec-Fetch-Dest': 'empty',
      'Referer': 'https://news.google.com/',
      'Accept-Encoding': 'gzip, deflate, br',
    }
    response = requests.post(url, headers=headers, data=payload)
    # The decoded article URL is embedded in this batchexecute response
    print(response.text)

if __name__ == '__main__':
    url = 'https://news.google.com/rss/articles/CBMifkFVX3lxTE9GWkFuUnJ4U0RFZU9KM3lHRkZWc3IxVF9WUWpRd3VnR2R4MkJMeXJ2cl9yXzBGcTJaTmFyckRDQ0Qtb21wNWdycDZ2RElrbW02U3NpRWVlRWtLYVhQS2wzdlgtQ0ZUdzJubDRJeGJuVU9SVUprcDhvVTlpTFpMUdIBgAFBVV95cUxQRjZURk1zZG1mZ0VVbUlmWFJsOUpMSXNNbzQ4S3BGUUFhdGpUaTk0ckM4VnUtVVduRmlmb0JVWHhHeDJZMkJhSDZPMkRxS1NCaV9ndlp6Z1dMRjNuSzFSYUxKWjAyNGpJeXpGQ2Q2SEdUUGdiNFBNZ1oxdkVOTGRLQg?oc=5'
    source, sign, ts = get_google_params(url)
    get_origin_url(source, sign, ts)
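
The response is not plain JSON: batchexecute replies start with an anti-JSON prefix and nest JSON-encoded strings. Based on responses I have seen (the format is undocumented, so treat this as an assumption), the print above could be replaced with a parsing step like:

import json

def parse_origin_url(response_text):
    # The payload follows the prefix after a blank line; the element at
    # [0][2] is itself a JSON string whose second item is the article URL.
    payload = json.loads(response_text.split("\n\n")[1])[:-2]
    return json.loads(payload[0][2])[1]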

@wastu01 (Contributor) commented Sep 3, 2024

I discovered a repository that includes a script for decoding Google News article URLs. Here's the link for reference:
https://github.com/SSujitX/google-news-url-decoder
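
If I'm reading that repo correctly, it's published on PyPI as googlenewsdecoder, and usage looks roughly like the following; the function name and return shape are taken from its README, so treat them as assumptions:

# pip install googlenewsdecoder
from googlenewsdecoder import new_decoderv1

# intermediate_url is a news.google.com link such as the ones in the earlier comments
result = new_decoderv1(intermediate_url, interval=1)  # interval: delay between requests
if result.get("status"):
    print("Decoded URL:", result["decoded_url"])
else:
    print("Error:", result["message"])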
