
    Fourier Transform Applications in Literary Analysis

By Team_AIBS News · March 14, 2025


Poetry is usually seen as a pure art form, ranging from the rigid structure of a haiku to the fluid, unconstrained nature of free verse. In analysing these works, though, to what extent can mathematics and data analysis be used to glean meaning from this free-flowing literature? Of course, rhetoric can be analysed, references can be found, and word choice can be questioned, but can the underlying, even unconscious, thought process of an author be uncovered by applying analytic techniques to literature? As an initial exploration into computer-assisted literary analysis, we will attempt to use a Fourier transform program to search for periodicity in a poem. To test our code, we will use two case studies: "Do Not Go Gentle into That Good Night" by Dylan Thomas, followed by Lewis Carroll's "Jabberwocky."

1. Data acquisition

a. Line splitting and word count

Before doing any calculations, all the necessary data must be collected. For our purposes, we will want a data set of the number of letters, words, and syllables in each line, plus each line's visual length. First, we need to parse the poem itself (which is input as a plain text file) into substrings for each line. This is easily done in Python with the .split() method; passing the delimiter "\n" into the method will split the file by line, returning a list of strings, one per line. (The full call is poem.split("\n").) Counting the number of words is as simple as splitting the lines, and follows naturally from it: iterating across all lines, apply the .split() method again, this time with no delimiter, so that it defaults to splitting on whitespace, turning each line string into a list of word strings. Then, to count the words on any given line, simply call the built-in len() function on that list; since each line has been broken into a list of words, len() returns the number of items in the list, which is the word count.
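As a minimal sketch of this step (using a short two-line snippet rather than the full poem file), the splitting and word counting described above looks like this:

poem = ("Do not go gentle into that good night,\n"
        "Old age should burn and rave at close of day;")

lines = poem.split("\n")        # one string per line
for line in lines:
    words = line.split()        # default split: whitespace
    print(len(words), words)    # word count, then the words themselves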

b. Letter count

To calculate the number of letters in each line, all we need to do is take the sum of the letter counts of each word, so for a given line we iterate over each word, calling len() to get the character count of that word. After iterating over all the words in a line, the counts are summed to give the total number of characters on the line; the code to perform this is sum(len(word) for word in words).

c. Visual length

Calculating the visual length of each line is straightforward; assuming a monospace font, the visual length of a line is simply the total number of characters (including spaces!) present on the line. Therefore, the visual length is simply len(line). However, most fonts are not monospace, especially common literary fonts like Caslon, Garamond, and Georgia. This presents an issue because, without knowing the exact font an author was writing in, we can't calculate the precise line length. While this assumption does leave room for error, considering the visual length in some capacity is important, so the monospace assumption stands.

d. Syllable count

Getting the syllable count without manually reading each line is the most difficult part of data collection. To identify a syllable, we will use vowel clusters. Note that in my program I defined a function, count_syllables(word), to count the syllables in each word. To preformat the word, we set it to all lowercase using word = word.lower() and remove any punctuation it may contain using word = re.sub(r'[^a-z]', '', word). Next, find all vowels or vowel clusters; each should be a syllable, since a syllable is defined as a unit of pronunciation containing one continuous vowel sound surrounded by consonants. To find each vowel cluster, we can use a regex over all vowels, including y: syllables = re.findall(r'[aeiouy]+', word). After this call, syllables will be a list of all vowel clusters in the given word. Finally, there must be at least one syllable per word, so even if you input a vowelless word ("cwm", for example), the function will return one syllable. The function is:

import re

def count_syllables(word):
    """Estimate the syllable count of a word using a simple vowel-grouping method."""
    word = word.lower()
    word = re.sub(r'[^a-z]', '', word)          # Remove punctuation
    syllables = re.findall(r'[aeiouy]+', word)  # Find vowel clusters
    return max(1, len(syllables))               # At least one syllable per word

That function returns the syllable count for any input word, so to find the syllable count for a full line of text, return to the loop used for data collection in 1.a-1.c and iterate over the words list, calling count_syllables on each word. Summing the per-word counts gives the count for the full line: num_syllables = sum(count_syllables(word) for word in words).
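As a quick sanity check of the heuristic, here is a small usage sketch on one line of Thomas's poem; the estimate is close to, but not exactly, the true count, which is the expected trade-off of the vowel-cluster approach:

line = "Rage, rage against the dying of the light."
words = line.split()
num_syllables = sum(count_syllables(word) for word in words)
print(num_syllables)  # heuristic gives 11; the true count is 10 (the silent
                      # 'e' in "rage" overcounts, while "dying" undercounts)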

e. Data collection summary

The data collection algorithm is compiled into a single function, which starts by splitting the input poem into its lines, iterates over each line performing all of the previously described operations, appends each value to a designated list for that metric, and finally builds a dictionary storing all the data points for a single line and appends it to a master data set. While the time complexity is effectively irrelevant for the small amounts of input data used here, the function runs in linear time, which is helpful if it is ever used to analyse large amounts of data. The data collection function in its entirety is:

def analyze_poem(poem):
    """Analyzes the poem line by line."""
    data = []
    word_counts, letters, lengths, sylls = [], [], [], []  # per-metric series

    lines = poem.split("\n")

    for line in lines:
        words = line.split()
        num_words = len(words)
        num_letters = sum(len(word) for word in words)
        visual_length = len(line)  # Approximate visual length (monospace)
        num_syllables = sum(count_syllables(word) for word in words)

        word_counts.append(num_words)
        letters.append(num_letters)
        lengths.append(visual_length)
        sylls.append(num_syllables)

        data.append({
            "line": line,
            "words": num_words,
            "letters": num_letters,
            "visual_length": visual_length,
            "syllables": num_syllables
        })

    return data
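A short, hypothetical usage sketch of the function on a two-line snippet might look like this:

sample = ("Do not go gentle into that good night,\n"
          "Old age should burn and rave at close of day;")

for entry in analyze_poem(sample):
    print(entry["words"], entry["letters"], entry["visual_length"], entry["syllables"])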
    

2. Discrete Fourier transform

Preface: This section assumes an understanding of the (discrete) Fourier transform; for a relatively brief and approachable introduction, try this article by Sho Nakagome.

a. The specific DFT algorithm

To address with some specificity the particular DFT algorithm I've used, we need to touch on NumPy's fast Fourier transform method. Suppose N is the number of discrete values being transformed: if N is a power of two, NumPy uses the radix-2 Cooley-Tukey algorithm, which recursively splits the input into even and odd indices. If N is not a power of two, NumPy applies a mixed-radix approach, where the input length is factorized into smaller prime factors and FFTs are computed using efficient base cases.
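As a quick aside (not part of the article's pipeline), a sketch like the following confirms that np.fft.fft agrees with a naive O(N²) DFT for both a power-of-two length and a prime length, regardless of which internal algorithm NumPy picks:

import numpy as np

def naive_dft(x):
    """Direct O(N^2) DFT, used only to cross-check np.fft.fft."""
    N = len(x)
    n = np.arange(N)
    k = n.reshape((N, 1))
    return np.exp(-2j * np.pi * k * n / N) @ x

for N in (8, 19):  # a power of two and a prime length
    x = np.random.rand(N)
    print(N, np.allclose(np.fft.fft(x), naive_dft(x)))  # True for both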

b. Applying the DFT

To apply the DFT to the previously collected data, I've created a function fourier_analysis, which takes a single data series (one metric's values for each line, extracted from the master data set) as its argument. Thankfully, since NumPy is so adept at mathematics, the code is straightforward. First, find N, the number of data points to be transformed; this is simply N = len(data). Next, apply NumPy's FFT algorithm to the data using np.fft.fft(data), which returns an array of complex coefficients representing the amplitude and phase of the Fourier series. Finally, np.abs(fft_result) extracts the magnitude of each coefficient, representing its strength in the original data. The function returns the Fourier magnitude spectrum as a list of frequency-magnitude pairs.

import numpy as np

def fourier_analysis(data):
    """Performs a Fourier transform on one data series and returns frequency data."""
    N = len(data)
    fft_result = np.fft.fft(data)    # Compute the Fourier transform
    frequencies = np.fft.fftfreq(N)  # Frequency bins (cycles per line)
    magnitudes = np.abs(fft_result)  # Magnitude of each FFT coefficient

    return list(zip(frequencies, magnitudes))  # (frequency, magnitude) pairs

The complete code can be found here, on GitHub.
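To tie the two pieces together, here is a hedged end-to-end sketch; the file name is a placeholder, and extracting the word-count series this way is my assumption about how the master data set feeds the transform:

with open("do_not_go_gentle.txt") as f:  # hypothetical input file
    poem_text = f.read()

data = analyze_poem(poem_text)
word_counts = [entry["words"] for entry in data]

for freq, mag in fourier_analysis(word_counts):
    if freq > 0:                         # keep positive frequencies only
        print(f"period = {1/freq:5.2f} lines, magnitude = {mag:6.2f}")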

3. Case studies

    a. Introduction

We've made it through all the code and tongue-twisting algorithms; it's finally time to put the program to the test. For the sake of time, the literary analysis done here will be minimal, putting the stress on the data analysis. Note that while this Fourier transform algorithm returns a frequency spectrum, we want a period spectrum, so the relationship \( T = \frac{1}{f} \) will be used to obtain one. To compare different spectra's noise levels, we will use the signal-to-noise ratio (SNR). The average noise is calculated as an arithmetic mean, \( P_{\text{noise}} = \frac{1}{N-1} \sum_{k \neq k_{\text{peak}}} |X_k| \), where \( X_k \) is the coefficient at index \( k \) and the sum excludes \( X_{\text{peak}} \), the coefficient at the signal peak. To find the SNR, simply take \( \frac{|X_{\text{peak}}|}{P_{\text{noise}}} \); a higher SNR means a greater signal strength relative to background noise. SNR is a strong choice for detecting poetic periodicity because it quantifies how much of the signal (i.e., structured rhythmic patterns) stands out against the background noise (random variations in word length or syllable count). Unlike variance, which measures overall dispersion, or autocorrelation, which captures repetition at specific lags, SNR directly highlights how dominant a periodic pattern is relative to irregular fluctuations, making it well suited for identifying metrical structure in poetry.
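As a small illustration of the metric just defined (a sketch rather than the author's exact code; restricting to positive frequencies before picking the peak is my assumption):

import numpy as np

def snr(spectrum):
    """Peak magnitude divided by the mean magnitude of all other components.

    spectrum: list of (frequency, magnitude) pairs from fourier_analysis;
    only positive-frequency components are considered here.
    """
    mags = np.array([mag for freq, mag in spectrum if freq > 0])
    peak = mags.max()
    noise = (mags.sum() - peak) / (len(mags) - 1)  # mean excluding the peak
    return peak / noise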

b. "Do Not Go Gentle into That Good Night" – Dylan Thomas

This work has a distinct and visible periodic structure, so it makes great test data. Unfortunately, the syllable data won't reveal anything interesting here (Thomas's poem is written in iambic pentameter); the word-count data, however, has the highest SNR value of any of the four metrics, 6.086.

Figure 1. Note that this figure and all that follow were generated using Google Sheets.

The spectrum above shows a dominant signal at a four-line period and relatively little noise in the other period ranges. Moreover, the fact that this is the highest SNR compared to letter count, syllable count, and visual length offers an interesting observation: the poem follows a rhyme scheme of ABA(blank), meaning the word count of each line repeats perfectly in tandem with the rhyme scheme. The SNRs of the other two related spectra are not far behind the word-count SNR, with the letter count at 5.724 and the visual length at 5.905. These two spectra also peak at a period of four lines, indicating that they too match the poem's rhyme scheme.

    c. “Jabberwocky” – Lewis Carroll

Carroll's writing is also largely periodic in structure, but has some irregularities; in the word-count period spectrum there is a distinct peak at ~5 lines, but the notably low noise (SNR = 3.55) is broken by three distinct sub-peaks at 3.11 lines, 2.54 lines, and 2.15 lines. This secondary structure is shown in Figure 2, implying that there is a significant secondary repeating pattern in the words Carroll used. Moreover, because the peaks grow as they approach a period of two lines, one conclusion is that Carroll tends to alternate word counts from line to line.

Figure 2.

This alternating pattern is mirrored in the period spectra of visual length and letter count, both of which have secondary peaks at 2.15 lines. However, the syllable spectrum shown in Figure 3 has a low magnitude at the 2.15-line period, indicating that the word count, letter count, and visual length of each line are correlated, but the syllable count is not.

Figure 3.

Interestingly, the poem follows an ABAB rhyme scheme, suggesting a connection between the visual length of each line and the rhyming pattern itself. One possible conclusion is that Carroll found it more visually appealing when writing for the rhyming ends of words to line up vertically on the page. This conclusion, that the visual aesthetic of each line shaped Carroll's writing style, can be drawn before ever reading the text.

    4. Conclusion

Applying Fourier analysis to poetry shows that mathematical tools can uncover hidden structures in literary works: patterns that may reflect an author's stylistic tendencies or even unconscious choices. In both case studies, a quantifiable relationship was found between the structure of the poem and metrics (word count, and so on) that are generally ignored in literary analysis. While this approach doesn't replace traditional literary analysis, it provides a new way to explore the formal qualities of writing. The intersection of mathematics, computer science, data analytics, and literature is a promising frontier, and this is just one way that technology can lead to new discoveries, with potential in broader data science fields like stylometry, sentiment and emotion analysis, and topic modeling.


