    Website Feature Engineering at Scale: PySpark, Python & Snowflake

By Team_AIBS News | May 5, 2025 | 11 min read


Introduction and Problem

Imagine you're staring at a database containing thousands of merchants across several countries, each with its own website. Your goal? Identify the top candidates to partner with in a new business proposal. Manually browsing each site is impossible at scale, so you need an automated way to gauge "how good" each merchant's online presence is. Enter the website quality score: a numeric feature (0-10) that captures key aspects of a site's professionalism, content depth, navigability, and visible product listings with prices. By integrating this score into your machine learning pipeline, you gain a powerful signal that helps your model distinguish the highest-quality merchants and dramatically improve selection accuracy.

Table of Contents

• Introduction and Problem
• Technical Implementation
  • Legal & Ethical Considerations
  • Getting Started
  • Fetch HTML Script in Python
  • Assign a Quality Score Script in PySpark
• Conclusion

    Technical Implementation

Legal & Ethical Considerations

Be a good citizen of the web.

• This scraper only counts words, links, images, and scripts, plus simple "contact/about/price" flags; it does not extract or store any private or sensitive data.
• Throttle responsibly: use modest concurrency (e.g. CONCURRENT_REQUESTS ≤ 10), insert small pauses between batches, and avoid hammering the same domain (see the sketch after this list).
• Retention policy: once you've computed your features or scores, purge raw HTML within a reasonable window (e.g. after 7-14 days).
• For very large runs, or if you plan to share extracted HTML, consider reaching out to site owners for permission or notifying them of your usage.
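
To make "throttle responsibly" concrete, here is a minimal pacing sketch. The constant names mirror the script's knobs, but the values and the process_batches helper are illustrative, not lifted from the repository:

import asyncio

CONCURRENT_REQUESTS = 10     # keep concurrency modest, at or below ~10
PAUSE_BETWEEN_BATCHES = 2.0  # small pause between batches, in seconds

async def process_batches(batches, fetch_batch):
    # one semaphore caps in-flight requests across the whole run
    semaphore = asyncio.Semaphore(CONCURRENT_REQUESTS)
    for batch in batches:
        await fetch_batch(batch, semaphore)
        await asyncio.sleep(PAUSE_BETWEEN_BATCHES)  # breathing room between batches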

Getting Started

Here's your folder structure once you clone the repository https://github.com/lucasbraga461/feat-eng-websites/ :

Code block 1. GitHub repository folder structure

├── src
│   ├── helpers
│   │   └── snowflake_data_fetch.py
│   ├── p1_fetch_html_from_websites.py
│   └── process_data
│       ├── s1_gather_initial_table.sql
│       └── s2_create_table_with_website_feature.sql
├── notebooks
│   └── ps_website_quality_score.ipynb
├── data
│   └── websites_initial_table.csv
├── README.md
├── requirements.txt
├── venv
├── .gitignore
└── .env

Your dataset should ideally be in Snowflake. To give an idea of how to prepare it in case it comes from different tables, refer to src/process_data/s1_gather_initial_table.sql; here's a snippet of it:

    Code block 2. s1_gather_initial_table.sql

    CREATE OR REPLACE TABLE DATABASE.SCHEMA.WEBSITES_INITIAL_TABLE AS
    (
    SELECT
       DISTINCT COUNTRY, WEBSITE_URL
    FROM DATABASE.SCHEMA.COUNTRY_ARG_DATASET
    WHERE WEBSITE_URL IS NOT NULL
    ) UNION ALL (
      
    SELECT
       DISTINCT COUNTRY, WEBSITE_URL
    FROM DATABASE.SCHEMA.COUNTRY_BRA_DATASET
    WHERE WEBSITE_URL IS NOT NULL
    ) UNION ALL (
    [...]
    SELECT
       DISTINCT COUNTRY, WEBSITE_URL
    FROM DATABASE.SCHEMA.COUNTRY_JAM_DATASET
    WHERE WEBSITE_URL IS NOT NULL
    )
    ;

Here's what this initial table should look like:

Figure 1. Initial table

    Fetch HTML Script in Python

With the data ready, this is how you call it, assuming your data is in Snowflake:

Code block 3. p1_fetch_html_from_websites.py using the Snowflake dataset

cd ~/Doc/GitHub/feat-eng-websites
python3 src/p1_fetch_html_from_websites.py -c BRA --use_snowflake

• The Python script expects the Snowflake table to be at DATABASE.SCHEMA.WEBSITES_INITIAL_TABLE, which can be adjusted to your use case in the code itself.

That will open a window in your browser asking you to authenticate to Snowflake. Once you authenticate, it will pull the data from the designated table and proceed with fetching the website content.
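
Under the hood, that browser-based login corresponds to Snowflake's externalbrowser authenticator. Here is a minimal sketch of the flow using snowflake-connector-python; the account and user values are placeholders, and the repo's actual helper lives in src/helpers/snowflake_data_fetch.py:

import snowflake.connector

# authenticator="externalbrowser" opens a browser window for SSO login
conn = snowflake.connector.connect(
    account="your_account_identifier",  # placeholder
    user="your_user@example.com",       # placeholder
    authenticator="externalbrowser",
)
cur = conn.cursor()
cur.execute("SELECT COUNTRY, WEBSITE_URL FROM DATABASE.SCHEMA.WEBSITES_INITIAL_TABLE")
rows = cur.fetchall()  # the script then proceeds to fetch each website's HTML
conn.close()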

If you choose to pull this data from a CSV file instead, drop the flag at the end and call it this way:

Code block 4. p1_fetch_html_from_websites.py using the CSV dataset

    cd ~/Doc/GitHub/feat-eng-websites
    python3 src/p1_fetch_html_from_websites.py -c BRA

GIF 1. Running p1_fetch_html_from_websites.py

Here's why this script is more powerful at fetching website content than a more basic approach; see Table 1:

Table 1. Advantages of this Fetch HTML script compared with a basic implementation

Technique | Basic approach | This script (p1_fetch_html_from_websites.py)
HTTP fetching | Blocking requests.get() calls one by one | Async I/O with asyncio + aiohttp to issue many requests in parallel and overlap network waits
User-Agent | Single default UA header for all requests | Rotates through a list of real browser UA strings to evade basic bot detection and throttling
Batching | Loads & processes the entire URL list in one go | Splits into chunks via BATCH_SIZE so you can checkpoint, limit memory use, and recover mid-run
Retries & timeouts | Relies on library defaults or crashes on slow/unresponsive servers | Explicit MAX_RETRIES and TIMEOUT settings retry transient failures and bound per-request wait times
Concurrency limit | Sequential or unbounded parallel calls (risking overload) | CONCURRENT_REQUESTS + aiohttp.TCPConnector + asyncio.Semaphore throttle the max in-flight connections
Event loop | Single loop reused; can hit "bound to different loop" errors when restarting | Creates a fresh asyncio event loop per batch to avoid loop/semaphore binding errors and ensure isolation
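
For reference, here is a condensed sketch of the fetching pattern Table 1 describes, combining UA rotation, bounded retries, a semaphore, and a fresh event loop per batch; constants and helper names are illustrative rather than copied from p1_fetch_html_from_websites.py:

import asyncio
import random
import aiohttp

USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15",
]
CONCURRENT_REQUESTS = 10
MAX_RETRIES = 3
TIMEOUT = aiohttp.ClientTimeout(total=15)

async def fetch_one(session, semaphore, url):
    """Fetch one URL with a rotated UA, bounded retries, and backoff."""
    headers = {"User-Agent": random.choice(USER_AGENTS)}
    for attempt in range(MAX_RETRIES):
        try:
            async with semaphore:  # throttle in-flight connections
                async with session.get(url, headers=headers) as resp:
                    return url, await resp.text()
        except (aiohttp.ClientError, asyncio.TimeoutError):
            await asyncio.sleep(2 ** attempt)  # back off on transient failures
    return url, None

async def fetch_batch(urls):
    """Fetch one batch of URLs concurrently."""
    semaphore = asyncio.Semaphore(CONCURRENT_REQUESTS)
    connector = aiohttp.TCPConnector(limit=CONCURRENT_REQUESTS)
    async with aiohttp.ClientSession(connector=connector, timeout=TIMEOUT) as session:
        return await asyncio.gather(*(fetch_one(session, semaphore, u) for u in urls))

# asyncio.run gives each batch a fresh event loop, avoiding
# "bound to different loop" errors between batches:
# results = asyncio.run(fetch_batch(batch_of_urls))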

It's generally better to store raw HTML in a proper database (Snowflake, BigQuery, Redshift, Postgres, etc.) rather than in CSV files. A single page's HTML can easily exceed spreadsheet limits (e.g. Google Sheets caps at 50,000 characters per cell), and managing hundreds of pages would bloat and slow down CSVs. While we include a CSV option here for quick demos or minimal setups, large-scale scraping and feature engineering are far more reliable and performant when run in a scalable data warehouse like Snowflake.
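
If you do route scraped pages into Snowflake, one straightforward option is write_pandas from the connector's pandas tools. This is a sketch under assumed table and connection names, not the repository's loading code:

import pandas as pd
from snowflake.connector.pandas_tools import write_pandas

# `conn` is an open snowflake.connector connection (see the earlier sketch)
df = pd.DataFrame(
    {"WEBSITE": ["https://example.com"], "HTML_CONTENT": ["<html>...</html>"]}
)
write_pandas(
    conn,
    df,
    table_name="WEBSITE_SCRAPED_DATA_BRA",  # assumed target table
    database="DATABASE",
    schema="SCHEMA",
    auto_create_table=True,  # create the table on first write
)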

Once you run it for, say, BRA, ARG and JAM, this is how your data folder will look:

Code block 5. Folder structure once you've run it for ARG, BRA and JAM

├── data
    │   ├── website_scraped_data_ARG.csv
    │   ├── website_scraped_data_BRA.csv
    │   ├── website_scraped_data_JAM.csv
    │   └── websites_initial_table.csv

Refer to Figure 2 to visualize what the first script generates, i.e. the table website_scraped_data_BRA. Note that one of the columns is html_content, which is a very large field since it holds the entire HTML content of the website.

Figure 2. Example of the table website_scraped_data_BRA generated by the first Python script

Assign a Quality Score Script in PySpark

Because each page's HTML can be huge, and you'll have hundreds or thousands of pages, you can't efficiently process or store all that raw text in flat files. Instead, we hand off to Spark via Snowpark (Snowflake's PySpark engine) for scalable feature extraction. See notebooks/ps_website_quality_score.ipynb for a ready-to-run example: just select the Python kernel in Snowflake and import the built-in Snowpark libraries to spin up your Spark session (see Code block 6).

Code block 6. Imports and Snowpark session setup

import pandas as pd
from bs4 import BeautifulSoup
import re
from tqdm import tqdm

import snowflake.snowpark as snowpark
from snowflake.snowpark.functions import col, lit, udf
from snowflake.snowpark.context import get_active_session
session = get_active_session()

Every market speaks its own language and follows different conventions, so we bundle all these rules into a simple country-specific config. For each country we define the contact/about keywords and price-pattern regexes that signal a "good" merchant site, then point the script at the corresponding Snowflake input and output tables. This makes the feature extractor fully data-driven, reusing the same code for every region with just a change of config.

    Code block 7. Config file

country_configs = {
   "ARG": {
       "name": "Argentina",
       "contact_keywords": ["contacto", "contáctenos", "observaciones"],
       "about_keywords": ["acerca de", "sobre nosotros", "quiénes somos"],
       "price_patterns": [r'ARS\s?\d+', r'\$\s?\d+', r'\d+\.\d{2}\s?\$'],
       "input_table": "DATABASE.SCHEMA.WEBSITE_SCRAPED_DATA_ARG",
       "output_table": "DATABASE.SCHEMA.WEBSITE_QUALITY_SCORES_ARG"
   },
   "BRA": {
       "name": "Brazil",
       "contact_keywords": ["contato", "fale conosco", "entre em contato"],
       "about_keywords": ["sobre", "quem somos", "informações"],
       "price_patterns": [r'R\$\s?\d+', r'\d+\.\d{2}\s?R\$'],
       "input_table": "DATABASE.SCHEMA.WEBSITE_SCRAPED_DATA_BRA",
       "output_table": "DATABASE.SCHEMA.WEBSITE_QUALITY_SCORES_BRA"
   },
[...]

Before we can register and use our Python scraper logic inside Snowflake, we first create a stage, a persistent storage area, by running the DDL in Code block 8. This creates a named location @STAGE_WEBSITES under your DATABASE.SCHEMA, where we'll upload the UDF package (along with dependencies like BeautifulSoup and lxml). Once the stage exists, we deploy the extract_features_udf there, making it available to any Snowflake session for HTML parsing and feature extraction. Finally, we set the country_code variable to kick off the pipeline for a specific country, before looping through other country codes as needed.

Code block 8. Create a stage to hold the created UDFs

    -- CREATE STAGE DATABASE.SCHEMA.STAGE_WEBSITES;
    
    country_code = "BRA"

Now, in this part of the code (refer to Code block 9), we'll define the UDF function extract_features_udf that extracts information from the HTML content. Here's what this part of the code does:

• Defines the Snowpark UDF called extract_features_udf, which lives in the Snowflake stage created previously
• Parses the raw HTML with the well-known BeautifulSoup library
• Extracts text features:
  • Total word count
  • Page title length
  • Flags for 'contact' and 'about' pages
• Extracts structural features:
  • Number of links
  • Number of images
  • Number of scripts
• Detects product listings by looking for any price pattern in the text
• Returns a dictionary of all these counts/flags, or zeros if the HTML was empty or any error occurred

Code block 9. Function to extract features from the HTML content

@udf(name="extract_features_udf",
    is_permanent=True,
    replace=True,
    stage_location="@STAGE_WEBSITES",
    packages=["beautifulsoup4", "lxml"])
def extract_features(html_content: str, CONTACT_KEYWORDS: str, ABOUT_KEYWORDS: str, PRICE_PATTERNS: list) -> dict:
   """Extracts text, structure, and product-related features from HTML."""
   if not html_content:
       return {
           "word_count": 0, "title_length": 0, "has_contact_page": 0,
           "has_about_page": 0, "num_links": 0, "num_images": 0,
           "num_scripts": 0, "has_price_listings": 0
       }

   try:
       soup = BeautifulSoup(html_content, 'lxml')
       # soup = BeautifulSoup(html_content[:MAX_HTML_SIZE], 'lxml')

       # Text features
       text = soup.get_text(" ", strip=True)
       word_count = len(text.split())

       title = soup.title.string.strip() if soup.title and soup.title.string else ""
       has_contact = bool(re.search(CONTACT_KEYWORDS, text, re.I))
       has_about = bool(re.search(ABOUT_KEYWORDS, text, re.I))

       # Structural features
       num_links = len(soup.find_all("a"))
       num_images = len(soup.find_all("img"))
       num_scripts = len(soup.find_all("script"))

       # Product listings detection
       # price_patterns = [r'€\s?\d+', r'\d+\.\d{2}\s?€', r'\$\s?\d+', r'\d+\.\d{2}\s?\$']
       has_price = any(re.search(pattern, text, re.I) for pattern in PRICE_PATTERNS)

       return {
           "word_count": word_count, "title_length": len(title), "has_contact_page": int(has_contact),
           "has_about_page": int(has_about), "num_links": num_links, "num_images": num_images,
           "num_scripts": num_scripts, "has_price_listings": int(has_price)
       }

   except Exception:
       return {"word_count": 0, "title_length": 0, "has_contact_page": 0,
               "has_about_page": 0, "num_links": 0, "num_images": 0,
               "num_scripts": 0, "has_price_listings": 0}

And the final part of the PySpark notebook, Process and generate output table, does four main things.
First, it applies the UDF extract_features_udf to the raw HTML, producing a single features column that holds a small dict of counts/flags for each page; see Code block 10.

Code block 10. Apply the UDF to each row

df_processed = df.with_column(
    "features",
    # note: the full notebook also passes the country-config keyword and
    # price-pattern arguments required by the UDF signature
    extract_features(col("HTML_CONTENT"))
)

Secondly, it turns each key of the features dict into its own column in the DataFrame (so you get separate word_count, num_links, etc.); see Code block 11.

Code block 11. Explode the dictionary into real columns

df_final = df_processed.select(
    col("WEBSITE"),
    col("features")["word_count"].alias("word_count"),
    ...,
    col("features")["has_price_listings"].alias("has_price_listings")
)

Thirdly, based on business rules I defined, it builds a 0-10 score by assigning points for each feature (e.g. word-count thresholds, presence of contact/about pages, product listings); see Code block 12.

Code block 12. Compute a single quality score

df_final = df_final.with_column(
    "quality_score",
    ( (col("word_count") > 300).cast("int")*2
      + (col("word_count") > 100).cast("int")
      + (col("title_length") > 10).cast("int")
      + col("has_contact_page")
      + col("has_about_page")
      + (col("num_links") > 10).cast("int")
      + (col("num_images") > 5).cast("int")
      + col("has_price_listings")*3
    )
)
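
To make the rules concrete: a page with word_count = 350 earns 2 + 1 for clearing both word-count thresholds, plus 1 for a title longer than 10 characters, 1 for a contact page, 1 for more than 10 links, and 3 for visible prices; with no about page and only 4 images it scores 2 + 1 + 1 + 1 + 1 + 3 = 9.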

And finally, it writes the final table back into Snowflake (replacing any existing table) so you can query or join these quality scores later.

Code block 13. Write the output table to Snowflake

    df_final.write.mode("overwrite").save_as_table(OUTPUT_TABLE)

Figure 3. Final table, output of DATABASE.SCHEMA.WEBSITE_QUALITY_SCORES_BRA

    Conclusion

Once computed and stored, the website quality score becomes a straightforward input to almost any predictive model, whether you're training a logistic regression, a random forest, or a deep neural network. As one of your strongest features, it quantifies a merchant's online maturity and reliability, complementing other data like sales volume or customer reviews. By combining this web-derived signal with your existing metrics, you'll be able to rank, filter, and recommend partners far more effectively, and ultimately drive better business outcomes.

GitHub repository implementation: https://github.com/lucasbraga461/feat-eng-websites/

    Disclaimer

The screenshots and figures in this article (e.g. of Snowflake query results) were created by the author. None of the numbers are drawn from real business data; they were manually generated for illustrative purposes. Likewise, all SQL scripts are handcrafted examples; they are not extracted from any live environment but are designed to closely resemble what a company using Snowflake might encounter.


