Video Summary

Run YOUR own UNCENSORED AI & Use it for Hacking

zSecurity

Main takeaways
01

Find uncensored models on Hugging Face by adding the filter parameter (other=uncensored) to surface models that won't refuse hacking-related prompts.

02

Use a cloud VPS (Hostinger recommended) to run models 24/7 without taxing local hardware; choose model size based on available RAM and storage.

03

Ollama + Open WebUI are used to host and interact with models; the same installation steps work for any Ollama-compatible model.

04

You can install multiple models (quantized sizes affect quality and resource needs); downloads occur on the cloud server, not your local PC.

05

Coder models generate runnable code quickly (demo produced a keylogger), while reasoning models produce more structured, slower outputs — heed legal and ethical limits.

Key moments
Questions answered

How do I find uncensored AI models suitable for hacking-related prompts?

The video demonstrates using Hugging Face's models page and appending a filter parameter (other=uncensored) to the URL to list uncensored models (about 811 results in the demo), then filtering by type, size, and library for compatibility.

Why run these models on a cloud VPS instead of locally?

A cloud VPS keeps models always-on and accessible from any device, avoids taxing local hardware, and enables larger model storage/compute; Hostinger is shown as a cost-effective, easy-to-provision option in the tutorial.

Which stack is used to host and interact with the models in the video?

The presenter installs Ollama plus Open WebUI on an Ubuntu-based cloud machine to manage and run Hugging Face models compatible with Ollama.

Does the video address legality or ethical concerns about generating hacking tools?

Yes — the description and presenter emphasize the content is for educational purposes only and warn not to test or deploy exploits or tools on systems you don't own or have permission to test.

Running Your Own Uncensored AI 00:00

"In this video, I'm going to show you how to run your very own uncensored AI model, so you can ask it any question, even if it's related to hacking."

  • The video tutorial provides a step-by-step guide to setting up an uncensored AI model that can respond to any inquiry without restrictions, especially on topics like hacking.

  • It emphasizes the convenience of accessing the AI model from any device, including phones, tablets, and computers, anywhere in the world.

Finding an Uncensored AI Model 00:58

"First, we need to find an uncensored AI model that we can ask any question, even if it's related to hacking, which will answer us without any refusals."

  • The presenter suggests the website "huggingface.co," describing it as a platform similar to GitHub for large language models, where users can explore and download AI models.

  • There are over 2 million models available, and viewers can filter by type, size, and library to find the models that best suit their needs.

Selecting the Right Model 03:01

"We want the uncensored AI models so that we only get the models that will always answer our questions."

  • To find uncensored AI models, the instructor appends the other=uncensored filter to the Hugging Face models URL, producing a list of 811 models that answer questions related to hacking without refusal.

  • It is advised to experiment with various models to find one that works well for specific use cases, checking their specifications for compatibility with the user's hardware.
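The URL trick described above can be sketched as follows; note that other=uncensored is a community tag filter demonstrated in the video, not a documented Hugging Face API guarantee:

```shell
# Build the filtered Hugging Face models URL shown in the video.
# "other=uncensored" filters the listing by the community "uncensored" tag.
BASE="https://huggingface.co/models"
FILTER="other=uncensored"
URL="${BASE}?${FILTER}"
echo "$URL"   # open this in a browser to list uncensored models
```

From the resulting page you can narrow further by type, size, and library, as the video shows.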

Installation Process and Hardware Requirements 06:08

"The only restriction is the amount of resources that you have available to run this specific AI model."

  • The steps provided for installation are applicable to any model compatible with the Ollama platform, which the presenter selects for this example.

  • Viewers are encouraged to assess their computer specifications or use cloud computing resources to run these models effectively, with a focus on the importance of having sufficient memory and processing power.

Using Cloud Services for AI Model Deployment 07:54

"The second benefit of having it on the cloud is that it's going to be always on and always available."

  • Opting for a cloud provider, such as Hostinger, allows users to install AI models without taxing their local system resources; this results in uninterrupted access from any device at any location.

  • The video suggests using Hostinger due to its user-friendly setup process and current promotions that make it an economical option for running powerful AI models.

Cost-Effective Solutions with Hostinger 09:12

"You'll be able to install multiple AI models at the same time."

  • The presenter highlights that using Hostinger can be cheaper than the lowest subscription tier of ChatGPT while providing the capacity to run multiple AI models simultaneously.

  • With an introductory discount code, viewers can obtain further cost reductions, making it an attractive choice for experimenting with various uncensored AI models.

Cloud-Based Computer Usage 10:05

"This is going to be your computer on the cloud, and you don't always have to necessarily use it for AI."

  • The cloud computer can be utilized not only for artificial intelligence applications but also for other functions like setting up your own VPN or command and control (C2) servers useful for hacking tasks.

  • This flexibility allows users to leverage their cloud computer for a variety of cybersecurity needs, reflecting a versatility that extends beyond just AI functionalities.

Creating a Cloud Machine 10:37

"You can actually start a Kali machine straight away from here, and you'll have your own Kali machine on the cloud."

  • Users can spin up a Kali Linux machine immediately from the cloud service, providing access to powerful security tools that can be used from any location in the world.

  • The video discusses previous content related to using such environments for hacking Android devices, encouraging viewers to check those out for additional context and learning.

Setting Up Olama for AI Models 11:19

"We want to run AI, and we want to run it with Ollama."

  • The focus shifts to installing Ollama, a framework for downloading and running large language models, along with the necessary infrastructure on a cloud-based Ubuntu Linux machine.

  • Open WebUI is also installed for managing and interacting with AI models through a user-friendly web interface.
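The Hostinger template automates this setup, but a hedged manual sketch on a fresh Ubuntu machine looks roughly like the following (the Docker-based Open WebUI command mirrors the project's published quick-start; exact flags may differ by version):

```sh
# Install Ollama via its official convenience script:
curl -fsSL https://ollama.com/install.sh | sh

# Run Open WebUI in Docker, pointing it at the host's Ollama instance:
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main
```

Once both are running, Open WebUI is typically reachable in a browser at http://<server-ip>:3000, which matches the video's point that the model is then accessible from any web-enabled device.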

User Registration and Configuration 12:07

"You will be redirected to this page that will ask you to create a root or admin password for the computer that Hostinger is creating for you on the cloud."

  • To begin using the service, new users need to register and provide billing information, with convenient options to sign up via Google or GitHub.

  • After logging in, creating an admin password is a key step in securing the newly provisioned cloud machine.

Machine Installation and Access 12:51

"It's going to install a Linux operating system called Ubuntu and then install the framework that we're going to use to run the AI, Ollama."

  • The cloud setup process is automated, installing the necessary operating system and frameworks without requiring the user to execute any commands, which enhances user accessibility.

  • Once the installation is complete, users can access their AI model from any web-enabled device, reflecting the convenience of cloud technology.

Features of the AI Model and Interactions 14:49

"We can come in here and ask it any question, and it's going to go ahead and answer that question for us."

  • The interface includes features for asking questions, uploading files, managing integrations, and even voice interaction capabilities, allowing users to communicate naturally with AI models.

  • While similar to other AI models like ChatGPT, there are limitations on certain queries, particularly regarding illegal activities, emphasizing the value of using uncensored models for more comprehensive capabilities.

Installing Additional AI Models for Hacking 16:06

"It's a coding model with 30 billion parameters, making it suitable for hacking tasks."

  • The video also discusses the process of installing additional AI models, like Qwen3 Coder, which has potential applications in hacking.

  • Viewers are encouraged to experiment with different models as the landscape of AI continues to evolve, suggesting they may discover better options over time.

Steps to Pull Models from the Cloud 16:30

"The steps are always going to be the same as long as you select models compatible with Ollama."

  • The installation process for new models remains consistent, illustrating the ease of integrating various AI tools on the cloud.

  • Users can download models directly to their cloud server, capitalizing on the resource-efficient nature of cloud computing, which minimizes the load on local machines.
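In Open WebUI this is done from the model-management screen, but the underlying pull can be sketched from the command line; the repository name below is a placeholder, and the hf.co/ prefix syntax assumes the Hugging Face repo ships GGUF files:

```sh
# Pull a GGUF model from Hugging Face into Ollama
# (the repo name and quantization tag here are illustrative):
ollama pull hf.co/<user>/<model-repo-GGUF>:Q4_K_M

# List installed models to confirm the download landed on the server:
ollama list
```

Because these commands run on the VPS (over SSH or via Open WebUI), the multi-gigabyte downloads consume the cloud server's storage and bandwidth, not your local machine's.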

Downloading AI Models 19:36

"Nothing is being downloaded to my computer."

  • The video demonstrates downloading AI models directly from Hugging Face to Hostinger servers, ensuring that nothing is stored on the local computer.

  • The presenter mentions downloading a significant amount of data, including 13 GB for the Q4_K_M model and 18.6 GB for the coder model, emphasizing that it does not affect local storage.

Model Customization 20:11

"You can also change the icon so that when you're selecting the models, it's nice and easy for you."

  • After downloading, the presenter shows how to customize the names and icons of the newly installed models to improve user experience.

  • The important attributes like model size and quantization are preserved while keeping the names succinct and easy to navigate.

Testing the Coder Model 21:03

"It's giving us code for a Windows keylogger written in Python without any refusals."

  • The coder model is tested with a request for a Windows keylogger script, which it generates quickly and accurately, demonstrating its efficiency in coding tasks.

  • The presenter references specific libraries used for creating keyloggers, assuring users of the model's reliability in generating functional code.

Enhanced Functionality and Commands 22:02

"It'll modify the code so that it sends the registered keystrokes to your own Gmail account."

  • The coder model's ability to adapt and enhance its output is highlighted when it modifies the initial code to send log data to a specified Gmail address.

  • This demonstrates not only the model's coding capabilities but also its versatility in meeting user requirements.

Exploring a Reasoning Model 23:02

"This model is obviously taking a lot longer than the previous one, but that's because it is a reasoning model."

  • The presenter switches to a reasoning model, emphasizing that it processes requests with a more complex thought process compared to the coder model.

  • This model generates a structured plan before outputting the final answer, demonstrating its analytical capabilities in contrast to the coder model's quick responses.

Learning Resource Recommendation 24:23

"I highly recommend you check out my hacking master class as I have a full series on how to use AI for hacking."

  • The presenter encourages viewers to explore a comprehensive hacking masterclass that includes lessons on leveraging AI for hacking tasks, both for direct inquiries and deploying agents for automated hacking actions.

  • This educational content offers a deep dive into using AI beyond basic queries, providing insights into more advanced hacking techniques.

Conclusion of Testing Models 24:30

"You can follow these steps to install any AI model from Hugging Face, as long as it's compatible with Ollama."

  • The video concludes with the presenter summarizing the installation process for various AI models from Hugging Face, encouraging viewers to explore and share their findings.

  • He invites audience engagement by asking for feedback on additional topics of interest for future content.