
    Detecting Malicious URLs Using LSTM and Google’s BERT Models

    By Team_AIBS News · May 28, 2025 · 20 min read


    The rise of cybercrime has made fraudulent webpage detection an important job in ensuring that the web is secure. It is evident that these dangers, such as the theft of private information, malware, and viruses, are associated with online activities on emails, social media applications, and websites. These web threats, referred to as malicious URLs, are used by cybercriminals to lure users into visiting web pages that appear real or legitimate.

    This paper explores the development of a deep learning system involving a transformer algorithm to detect malicious URLs, with the goal of improving on an existing method, Long Short-Term Memory (LSTM). (Devlin et al., 2019) introduced a natural language modelling algorithm (BERT) developed by Google Brain in 2017. This model is capable of making more accurate predictions, outperforming recurrent neural network systems such as Long Short-Term Memory (LSTM) and Gated Recurrent Units (GRU). In this project, I compared BERT's performance with LSTM as a text classification technique. With the processed dataset containing over 600,000 URLs, a pre-trained model is developed, and results are compared using performance metrics such as R² score, accuracy, recall, etc. (Y. E. Seyyar et al., 2022). The LSTM algorithm achieved an accuracy rate of 91.36% and an F1 score of 0.90 (higher than BERT's) in the classification of both uncommon and common requests.

    Keywords: Malicious URLs, Long Short-Term Memory, phishing, benign, Bidirectional Encoder Representations from Transformers (BERT).

    1.0 Introduction

    With the usability of the Web through the Internet, there has been an increasing number of users over time. As all digital devices are connected to the internet, this has also resulted in an increasing number of phishing threats through websites, social media, emails, applications, etc. (Morgan, S., 2024) reported that more than $9.5 trillion was lost globally due to leaks of private information.

    Therefore, innovative approaches have been introduced over time to automate the task of ensuring safer internet usage and data security. The Symantec 2016 Internet Security Report (Vanhoenshoven et al., 2016) shows scammers have caused most cyber-attacks involving corporate data breaches on browsers and websites, as well as other sheer malware attempts using the Uniform Resource Locator to bait users.

    Structure of a URL (Image by author)

    Recently, blacklisting, reputation-based systems, and machine learning algorithms have been used by cybersecurity professionals to improve malware detection and make the web safer. Google's statistics reported that over 9,500 suspicious web pages are blacklisted and blocked per day. The existence of these malicious web pages represents a significant risk to the information security of web applications, particularly those that deal with sensitive data (Sankaran et al., 2021). Because it is so easy to implement, blacklisting has become the standard approach. The false-positive rate is also significantly lowered with this method. The problem, however, is that it is extremely difficult to keep a detailed list of malicious URLs up to date, especially considering that new URLs are typically created daily. In order to circumvent filters and trick users, cybercriminals have come up with ingenious methods, such as obfuscating the URL so it looks real. The field of Artificial Intelligence (AI) has seen significant advancements and applications in a variety of domains, including cybersecurity. One crucial aspect of cybersecurity is detecting and preventing malicious URLs, which can result in serious consequences such as data breaches, identity theft, and financial losses. Given the dynamic and ever-changing nature of cyber threats, detecting malicious URLs is a difficult task.

    This project aims to develop a deep learning system for text classification called Malicious URL Detection using pre-trained Bidirectional Encoder Representations from Transformers (BERT). Can the BERT model outperform existing methods in malicious URL detection? The expected outcome of this study is to demonstrate the effectiveness of the BERT model in detecting malicious URLs and compare its performance with recurrent neural network methods such as LSTM. I used evaluation metrics such as accuracy, precision, recall, and F1-score to compare the models' performance.

    2.0. Background

    Machine learning methods like Random Forest, Multi-Layer Perceptron, and Support Vector Machines, and deep learning methods like LSTM and other CNNs are just a few of the techniques proposed in the existing literature for detecting harmful URLs. However, these methods have drawbacks, such as the fact that they require traditional hand-crafted features and are unable to cope with complex data, resulting in overfitting.

    2.1. Related works

    To improve the time for obtaining the page content or processing the text, (Kan and Thi, 2005) used a method of categorising websites based on their URLs. Classification features were collected from the URL after it was parsed into several tokens. Token dependencies in time order were modelled by these characteristics. They concluded that the classification rate increased when fine-grained URL segmentation was combined with feature extraction. This approach paved the way for other research on building complex deep learning models for text classification. Treating it as a binary text classification problem, (Vanhoenshoven et al., 2016) developed models for the detection of malicious URLs and evaluated the performance of classifiers including Naive Bayes, Support Vector Machines, Multi-Layer Perceptron, etc. Subsequently, text embedding methods implementing transformers have produced state-of-the-art results in NLP tasks. A similar model was devised by (Maneriker et al., 2021), in which they pre-trained and refined an existing transformer architecture using only URL data. The URL dataset included 1.29 million entries for training and 1.78 million entries for testing. Initially, the BERT architecture supported the masked language modelling framework, which may not be necessary in this report.

    For the classification process, the BERT and RoBERTa algorithms were fine-tuned, and results were evaluated and compared to propose a model called URLTran (URL Transformers) that uses transformers to significantly improve the performance of malicious URL detection with very low false positive rates in comparison with other deep learning networks. With this method, the URLTran model achieved an 86.8% true positive rate (TPR) compared to the best baseline's TPR of 71.20%, an improvement of 21.9%. This method was able to classify and predict whether a detected URL is benign or malicious.

    Furthermore, an RNN-based model was proposed by (Ren et al., 2019) where extracted URLs were converted into word vectors (characters) by using pre-trained Word2Vec and Bi-LSTM (bi-directional long short-term memory) and then classified. After validation and evaluation, the model achieved 98% accuracy and an F1 score of 95.9%. This model outperformed almost all the NLP methods but only processed text characterisation one step at a time. However, there is a need to develop an improved model using BERT to process sequential input all at once. Although these models have demonstrated some improvement with big data, they are not without their limitations. The sequential nature of text data, for instance, may be difficult for RNNs, while CNNs often fail to capture long-term dependencies in the data (Alzubaidi et al., 2021). As the amount and complexity of textual data on the web continue to increase, it is possible that existing models will become inadequate.

    3.0. Aims

    This project presented the importance of a bidirectional pre-trained model for text classification. (Radford et al., 2018) implemented unidirectional language models for pre-training. In comparison, a shallow concatenation of independently trained left-to-right and right-to-left linear models was created (Devlin et al., 2019; Peters et al., 2018). Here, I used a pre-trained BERT model to achieve state-of-the-art performance on a large range of sentence-level and token-level tasks (Han et al., 2021), with the aim of outperforming many RNN architectures and thereby reducing the need for these frameworks. In this case, the hyper-parameters of the LSTM algorithm will not be fine-tuned.

    Specifically, this research paper emphasises:

    1. Developing LSTM and pre-trained BERT models to detect (classify) whether a URL is unsafe or not.
    2. Comparing results of the base model (LSTM) and the pre-trained BERT using evaluation metrics such as recall, accuracy, F1 score, and precision. This will help to determine whether the base model's performance is better.
    3. BERT automatically learns latent representations of words and characters in context. The only task is to fine-tune the BERT model to improve the baseline performance. This proposes a computationally simple alternative to RNNs, in place of the more resource-intensive and computationally expensive architectures.
    4. Analysis, model development and evaluation took about 7 weeks, and the goal was to achieve a significantly reduced training runtime with Google's BERT model.

    4.0. Methodology

    This section explains all the processes involved in implementing a deep learning system for detecting malicious URLs. Here, a transformer-based framework was developed from an NLP sequence perspective (Rahali and Akhloufi, 2021) and used to statistically analyse a public dataset.

    Figure 4.0. Methodology Process (Adapted from Rahali and Akhloufi, 2021)

    4.1. The dataset

    The dataset used for this report was compiled and extracted from Kaggle (license information). This dataset was prepared to carry out the classification of webpages (URLs) as malicious or benign. Datasets consisting of URL entries for training, validation and testing were collected.

    Image by author (code visualisation)

    To analyse the data using deep learning models, a large dataset of 651,191 URL entries was retrieved from Phishtank, PhishStorm, and a malware domain blacklist. It contains:

    • Benign URLs: These are the safe web pages to browse. Exactly 428,103 entries were known to be secure.
    • Defacement URLs: These webpages are used by cybercriminals or hackers to clone real and secure websites. These contain 96,457 URLs.
    • Phishing URLs: They are disguised as genuine links to trick users into providing personal and sensitive information, which risks the loss of funds. 94,111 entries of the total dataset were flagged as phishing URLs.
    • Malware URLs: They are designed to manipulate users into downloading them as software and applications, thereby exploiting vulnerabilities. There are 32,520 malware webpage links in the dataset.
    Table 4.1. The types of URLs and their fraction of the dataset (Image by author)

    4.2. Feature extraction

    For the URL dataset, feature extraction was used to transform raw input data into a format supported by machine learning algorithms (Li et al., 2020). It converts categorical data into numerical features, while feature selection selects a subset of relevant features from the original dataset (Dash and Liu, 1997; Tang and Liu, 2014).
    View the data analysis and model development file here. The following steps were taken:

    1. Combining the phishing, malware and defacement URLs into a single Malicious URL type for easier selection. All URLs are then labelled benign or malicious.

    2. Converting the URL types from categorical variables into numerical values. This is a crucial step because deep learning model training requires only numerical values. Benign and phishing URLs are categorised as 0 and 1, respectively, and placed into a new column called "Class".

    3. The 'url_len' feature was used to compute the URL length to extract features from the URLs in the dataset. Using the 'process_tld' function, the top-level domain (TLD) of each URL was extracted.

    4. Some potential features for URL classification, including the presence of the special characters ['@', '?', '-', '=', '.', '#', '%', '+', '$', '!', '*', ',', '//'], were represented and added as columns to the dataset using the 'abnormal_url' feature. This feature (function) uses binary classification to verify whether there are abnormalities in each URL's characters.

    5. Another selection was performed on the dataset, covering the number of characters (letters and digit counts), https, shortening service, and IP address of all entries. These provide additional information for training the model.
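    The article links to the full notebook rather than showing the code; as a rough sketch (not the author's implementation: the names `process_tld` and `abnormal_url` appear in the text, but their exact logic here is an assumption), the steps above might look like:

```python
import re
from urllib.parse import urlparse

SPECIAL_CHARS = ['@', '?', '-', '=', '.', '#', '%', '+', '$', '!', '*', ',', '//']

def process_tld(url):
    """Extract the top-level domain from a URL (assumed behaviour)."""
    netloc = urlparse(url if '://' in url else 'http://' + url).netloc
    parts = netloc.split('.')
    return parts[-1] if len(parts) > 1 else None

def abnormal_url(url):
    """Binary flag: 1 if the hostname reappears later in the URL, else 0."""
    netloc = urlparse(url if '://' in url else 'http://' + url).netloc
    rest = url.split(netloc, 1)[-1] if netloc else ''
    return 1 if netloc and re.search(re.escape(netloc), rest) else 0

def extract_features(url):
    """Build the numeric feature columns described in steps 3-5."""
    feats = {'url_len': len(url),
             'tld': process_tld(url),
             'abnormal_url': abnormal_url(url)}
    for ch in SPECIAL_CHARS:
        feats[ch] = url.count(ch)          # special-character counts
    feats['letters'] = sum(c.isalpha() for c in url)
    feats['digits'] = sum(c.isdigit() for c in url)
    feats['https'] = 1 if url.startswith('https://') else 0
    return feats
```

    A row such as `extract_features('https://example.com/login?user=1')` then yields one numeric feature vector per URL.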

    4.3. Classification – model development and training

    Using pre-labelled features, the training stage learns the association between labels and text. This stage involves identifying the URL types in the dataset. As an NLP technique, it is required to assign texts (words) into sentences and queries (Minaee et al., 2021). A recurrent neural network model architecture defines an optimised model. To ensure a balanced evaluation, the data was split into an 80% training set and a 20% testing set. The texts were labelled using word embeddings for both the LSTM and the pre-trained BERT models. The dependent variables are the encoded URL types (Classes), as this is a binary classification.
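    The labelling and split described above can be sketched in plain Python (a hypothetical stand-in for the notebook's pandas/scikit-learn calls):

```python
import random

def encode_and_split(urls, types, test_frac=0.2, seed=42):
    """Collapse phishing/malware/defacement into one malicious class (1),
    keep benign as 0, then make a random 80/20 train-test split."""
    labels = [0 if t == 'benign' else 1 for t in types]
    idx = list(range(len(urls)))
    random.Random(seed).shuffle(idx)          # deterministic shuffle
    cut = int(len(idx) * (1 - test_frac))
    train = [(urls[i], labels[i]) for i in idx[:cut]]
    test = [(urls[i], labels[i]) for i in idx[cut:]]
    return train, test
```

    The real project would stratify the split and feed the text through word embeddings, but the label encoding and 80/20 partition are the same idea.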

    4.3.1. Long short-term memory model

    LSTM was found to be the most popular architecture because of its ability to capture long-term dependencies, using word2vec (Mikolov et al., 2013) to train on billions of words. After preprocessing and feature extraction, the data was set up for LSTM model training, testing and validation. To determine the appropriate sequence length, the number and size of layers (input and output layers) were proposed before training the model. Hyperparameters such as the epoch count, learning rate, and batch size were tuned to achieve optimal performance.

    The memory cell of a typical LSTM unit has three gates (input gate, forget gate, and output gate) (Feng et al., 2020). Contrary to a "feedforward neural network, the output of a neuron" at any time can feed back into the same neuron as input (Do et al., 2021). To prevent overfitting, a dropout function is applied to several layers one after the other. The first layer added is an embedding layer, which is used to create dense vector representations of words in the input text data. However, only one LSTM layer was used in this architecture due to the long training time.
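    The three gates can be written out explicitly. As an illustration (explanatory NumPy, not the project's Keras model), one LSTM cell update looks like this:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One step of an LSTM cell. W, U and b each stack the input-gate,
    forget-gate, output-gate and candidate transforms (4*hidden rows)."""
    hidden = h_prev.shape[0]
    z = W @ x + U @ h_prev + b            # all four pre-activations at once
    i = sigmoid(z[0:hidden])              # input gate
    f = sigmoid(z[hidden:2*hidden])       # forget gate
    o = sigmoid(z[2*hidden:3*hidden])     # output gate
    g = np.tanh(z[3*hidden:4*hidden])     # candidate cell state
    c = f * c_prev + i * g                # new memory cell
    h = o * np.tanh(c)                    # new hidden state
    return h, c
```

    The forget gate decides how much of the old cell state survives, which is what lets the unit carry long-term dependencies across the token sequence.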

    4.3.2. BERT model

    Researchers proposed the BERT architecture for NLP tasks because it has higher overall performance than RNNs and LSTM. A pre-trained BERT model was implemented in this project to process text sequences and capture the semantic information of the input, which can help reduce training time and improve the accuracy of malicious URL detection. After the URL data was pre-processed, it was converted into sequences of tokens, which were then fed into the BERT model for processing (Chang et al., 2021). Due to the large number of data entries in this project, the BERT model was fine-tuned to learn the relevant features of each type of URL. Once trained, the model was used to classify URLs as malicious (phishing) or benign with improved accuracy and performance.

    Google's BERT model architecture (Song et al., 2021)

    Figure 4.3.2 describes the processes involved in model training with the BERT algorithm. A tokenization phase is required for splitting text into tokens. Initially, raw text is separated into words, which are then converted to unique integer IDs via a lookup table. WordPiece tokenization (Song et al., 2021) was implemented using the BertTokenizer class. The tokenizer includes the BERT token-splitting algorithm and a WordPieceTokenizer (Rahali and Akhloufi, 2023). It accepts words (sentences) as input and outputs token IDs.
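    The real BertTokenizer ships with a vocabulary of roughly 30,000 entries; purely to illustrate the greedy longest-match-first idea behind WordPiece (toy vocabulary, not BERT's), a minimal pass looks like:

```python
def wordpiece_tokenize(word, vocab, unk='[UNK]'):
    """Greedy longest-match-first WordPiece: repeatedly take the longest
    vocab entry that prefixes the remaining characters; non-initial
    pieces carry the '##' continuation marker."""
    pieces, start = [], 0
    while start < len(word):
        end, cur = len(word), None
        while start < end:
            piece = word[start:end]
            if start > 0:
                piece = '##' + piece      # continuation of a word
            if piece in vocab:
                cur = piece
                break
            end -= 1                      # shrink and retry
        if cur is None:
            return [unk]                  # whole word becomes [UNK]
        pieces.append(cur)
        start = end
    return pieces

vocab = {'url', '##s', 'phish', '##ing', 'mal', '##ware'}
```

    With this toy vocabulary, `wordpiece_tokenize('phishing', vocab)` yields `['phish', '##ing']`; the real tokenizer then maps each piece to its integer ID through the lookup table mentioned above.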

    5.0. Experiments

    Specific hyper-parameters were used for BERT, while an LSTM model with a single hidden layer was tuned based on performance on the validation set. Due to an unbalanced dataset, only 522,214 entries were parsed, consisting of 417,792 training entries and 104,422 testing entries, a train-test split of 80% to 20%.

    The parameters used for training are described below:

    Table 5.0. Hyperparameters used in the Keras library for the LSTM and BERT models (Image by author)

    5.1. LSTM (baseline)

    The results indicated a dropout rate of 0.2 and a batch size of 1024, achieving a training accuracy of 91.23% and a validation accuracy of 91.36%. Only one LSTM layer was used in the architecture due to the long training time (an average of 25.8 minutes). However, adding more layers to the neural network results in a high computation cost, thereby reducing the model's overall performance.

    LSTM algorithm experiment setup (Do et al., 2021)

    5.2. Pre-trained BERT model

    This model was tokenized, but the problem was that the classifier could not be initialized from the checkpoint. Therefore, some layers were affected. This model requires further sequence classification before pre-training. The expectations were not met due to the complex computation, although it was proposed to have excellent performance.

    6.0. Results

    The experimental outcomes of the two developed models are evaluated using performance metrics. These metrics show how well the models performed on the test data. They are presented to evaluate the proposed approach's effectiveness in detecting malicious web pages.

    6.1. Performance Metrics

    To evaluate the performance of the proposed models, a confusion matrix was used due to its evaluation measures.

    Table 6.1 Binary classification of actual and predicted results
    • True Positive (TP): samples that are accurately predicted as malicious (phishing) (Amanullah et al., 2020).
    • True Negative (TN): samples that are accurately predicted as benign URLs.
    • False Positive (FP): samples that are incorrectly predicted as phishing URLs.
    • False Negative (FN): instances that are incorrectly predicted as benign URLs.
      Accuracy = (TP + TN) / (TP + TN + FP + FN)
      Precision = TP / (TP + FP)
      Recall = TP / (TP + FN)
      F1-score = (2 × Precision × Recall) / (Precision + Recall)
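    As a quick worked example of these formulas (the counts below are made up for illustration, not the project's confusion matrix):

```python
def classification_metrics(tp, tn, fp, fn):
    """Accuracy, precision, recall and F1 from confusion-matrix counts,
    exactly as defined by the formulas above."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Hypothetical counts: 80 phishing hits, 90 benign hits, 10 false alarms, 20 misses
acc, prec, rec, f1 = classification_metrics(tp=80, tn=90, fp=10, fn=20)
# accuracy = 170/200 = 0.85, recall = 80/100 = 0.80
```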
    Table 6.2. Classification report for the developed models (Image by author)

    The LSTM model achieved an accuracy of 91.36% and a loss of 0.25, while the pre-trained BERT model achieved a lower accuracy (75.9%) than expected due to a hardware malfunction.

    6.2. Validation

    The LSTM performed well: based on the validation accuracy, it will detect malicious URLs 9 out of 10 times.

    Accuracy validation and loss validation (LSTM). Image by author

    However, the pre-trained BERT could not reach the higher expectation due to the unbalanced and large dataset.

    Confusion matrix for LSTM and BERT models (Image by author)

    7.0. Conclusion

    Overall, LSTM models can be a powerful tool for modelling sequential data and making predictions based on temporal dependencies. However, it is important to carefully consider the nature of the data and the problem at hand before deciding to use an LSTM model, as well as to properly set up and tune the model to achieve the best results. Due to the large dataset, an increased batch size (1024) resulted in a shorter training time and improved the validation accuracy of the model. This could be due to not tokenizing the input during training and testing. BERT's maximum sequence length is 512 tokens, which can be inconvenient for some applications. If a sequence is shorter than the limit, padding tokens have to be added to it; otherwise, it has to be truncated (Rahali and Akhloufi, 2021). Also, to understand words and sentences better, BERT needs modified embeddings to represent context at the character level. Although these capabilities performed well with complex word embeddings, they can also result in longer training time when used with larger datasets. Nevertheless, further research is required to detect patterns during malicious URL detection.

    References

    • Alzubaidi, L., Zhang, J., Humaidi, A. J., Duan, Y., Santamaría, J., Fadhel, M. A., & Farhan, L. (2021). Review of deep learning: Concepts, CNN architectures, challenges, applications, future directions. Journal of Big Data, 8(1), 1-74. https://doi.org/10.1186/s40537-021-00444-8
    • Amanullah, M. A., Habeeb, R. A. A., Nasaruddin, F. H., Gani, A., Ahmed, E., Nainar, A. S. M., Akim, N. M., & Imran, M. (2020). Deep learning and big data technologies for IoT security. Computer Communications, 151, 495-517. https://doi.org/10.1016/j.comcom.2020.01.016
    • Chang, W., Du, F., and Wang, Y. (2021). "Research on Malicious URL Detection Technology Based on BERT Model," IEEE 9th International Conference on Information, Communication and Networks (ICICN), Xi'an, China, pp. 340-345. doi: 10.1109/ICICN52636.2021.9673860
    • Dash, M., & Liu, H. (1997). Feature selection for classification. Intelligent Data Analysis, 1(1-4), 131-156. https://doi.org/10.1016/S1088-467X(97)00008-5
    • Do, N.Q., Selamat, A., Krejcar, O., Yokoi, T. and Fujita, H. (2021). Phishing webpage classification via deep learning-based algorithms: an empirical study. Applied Sciences, 11(19), p.9210.
    • Devlin, J., Chang, M. W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
    • Feng, J., Zou, L., Ye, O., and Han, J. (2020). "Web2Vec: Phishing Webpage Detection Method Based on Multidimensional Features Driven by Deep Learning," IEEE Access, vol. 8, pp. 221214-221224. doi: 10.1109/ACCESS.2020.3043188
    • Han, X., Zhang, Z., Ding, N., Gu, Y., Liu, X., Huo, Y., Qiu, J., Yao, Y., Zhang, A., Zhang, L., Han, W., Huang, M., Jin, Q., Lan, Y., Liu, Y., Liu, Z., Lu, Z., Qiu, X., Song, R., . . . Zhu, J. (2021). Pre-trained models: Past, present and future. AI Open, 2, 225-250. https://doi.org/10.1016/j.aiopen.2021.08.002
    • Kan, M-Y., and Thi, H. (2005). Fast webpage classification using URL features. pp. 325-326. doi: 10.1145/1099554.1099649
    • Li, Q., Peng, H., Li, J., Xia, C., Yang, R., Sun, L., Yu, P.S. and He, L. (2020). A survey on text classification: From shallow to deep learning. arXiv preprint arXiv:2008.00364.
    • Maneriker, P., Stokes, J. W., Lazo, E. G., Carutasu, D., Tajaddodianfar, F., & Gururajan, A. (2021). URLTran: Improving Phishing URL Detection Using Transformers. arXiv preprint arXiv:2106.05256.
    • Mikolov, T., Sutskever, I., Chen, K., Corrado, G.S. and Dean, J. (2013). Distributed representations of words and phrases and their compositionality. Advances in Neural Information Processing Systems, 26.
    • Minaee, S., Kalchbrenner, N., Cambria, E., Nikzad, N., Chenaghlu, M. and Gao, J. (2021). Deep Learning-based Text Classification. ACM Computing Surveys, 54(3), pp.1-40. https://doi.org/10.1145/3439726
    • Morgan, S. (2024). 2024 Cybersecurity Almanac: 100 Facts, Figures, Predictions and Statistics. Cybersecurity Ventures. https://cybersecurityventures.com/2024-cybersecurity-almanac/
    • Peters, M.E., Ammar, W., Bhagavatula, C. and Power, R. (2017). Semi-supervised sequence tagging with bidirectional language models. arXiv preprint arXiv:1705.00108. Available at: https://arxiv.org/abs/1705.00108
    • Radford, A., Narasimhan, K., Salimans, T. and Sutskever, I. (2018). Improving Language Understanding by Generative Pre-Training. Available at: https://www.cs.ubc.ca/~amuham01/LING530/papers/radford2018improving.pdf
    • Rahali, A. & Akhloufi, M. A. (2021). MalBERT: Using transformers for cybersecurity and malicious software detection. arXiv preprint arXiv:2103.03806.
    • Ren, F., Jiang, Z., & Liu, J. (2019). A Bi-Directional LSTM Model with Attention for Malicious URL Detection. 2019 IEEE 4th Advanced Information Technology, Electronic and Automation Control Conference (IAEAC), 1, 300-305.
    • Sankaran, M., Mathiyazhagan, S., P., Dharmaraj, M. (2021). 'Detection of Malicious URLs Using Machine Learning Techniques', Int. J. of Aquatic Science, 12(3), pp. 1980-1989.
    • Seyyar, Y. E., Yavuz, A. G. and Ünver, H. M. (2022). An attack detection framework based on BERT and deep learning. IEEE Access, vol. 10, pp. 68633-68644. doi: 10.1109/ACCESS.2022.3185748
    • Song, X., Salcianu, A., Song, Y., Dopson, D., and Zhou, D. (2020). Fast WordPiece Tokenization. arXiv preprint arXiv:2012.15524.
    • Tang, J., Alelyani, S. and Liu, H. (2014). Feature selection for classification: A review. Data Classification: Algorithms and Applications, p. 37.
    • Vanhoenshoven, F., Nápoles, G., Falcon, R., Vanhoof, K. & Köppen, M. (2016). Detecting malicious URLs using machine learning techniques. IEEE.


