Artisan's AWS AMI is currently unavailable as we transition our deployment to a more reliable and versatile Docker image. Scheduled release date: Sep 3

Artisan CLI: Introduction

An artisan is a skilled craftsperson who creates high-quality items by hand. In our case, Artisan is a CLI tool that exposes your security automation gaps, helps you craft a posture that enables 100% coverage, and autonomously and continuously checks your security automation posture so that your environment never has security automation & response gaps.

Quick Start

For the public preview, we've bundled Artisan CLI in an AWS AMI image alongside a pre-installed instance of Splunk SOAR, to make testing and using the CLI easy and hassle-free.

Steps
  • Create EC2 instance from AMI with ID:
    ami-0b0fb4e17bc1eee94

    Run the command on the right or use the AWS Console Dashboard to create your own EC2 instance from Artisan's AMI. You can view the video on the side for a quick tutorial on how to create an EC2 instance from an AMI.

  • Ensure security group allows traffic on port 8443

    Create a simple inbound rule for the security group you assigned to the EC2 instance, and make sure port 8443 can receive traffic (see the example command after the run-instances command below).

  • Access the EC2 instance via SSH or EC2 Instance Connect

    Run su ec2-user to switch to user ec2-user

    Run /opt/phantom/bin/start_phantom.sh to start Splunk SOAR.

    Run cd /home/ec2-user/artisan to go to Artisan's directory.

    Ensure .env is configured as per your preferences. Artisan will throw errors if .env is not set up properly.

    Run ./artisan to start an Artisan CLI interactive session (a consolidated session sketch follows these steps).

  • Prepare your Splunk SOAR instance required

    See the Preparing Splunk SOAR section below for how to prepare your Splunk SOAR instance.

  • Edit .env file

    The .env file is located inside Artisan's directory. Here you can modify LLM_PROVIDER, LLM_API_KEY, LLM_MODEL, LLM_EXTRA_HEADERS_JSON, SOAR_IP, and PH_AUTH_TOKEN.

    You can also use your local LLM instance: simply type custom in the LLM_PROVIDER field and leave LLM_API_KEY blank if your local inference doesn't require authentication. You will find more details and instructions written as comments inside .env.

    Read more in the .env section below.
  • Execute

    Run ./artisan -h to get a menu of arguments you can pass, or simply run ./artisan to start an interactive session.
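
Putting the access steps together, the sketch below shows one possible end-to-end session. The key path and public DNS are placeholders for your own values, and vi stands in for whatever editor you prefer.

command
Consolidated session on the EC2 instance
# Connect via SSH (placeholder key path and public DNS)
ssh -i /path/to/your-key.pem ec2-user@YOUR_EC2_PUBLIC_DNS

# On the instance
su ec2-user                          # switch to ec2-user if not already that user
/opt/phantom/bin/start_phantom.sh    # start Splunk SOAR
cd /home/ec2-user/artisan            # go to Artisan's directory
vi .env                              # review and adjust the configuration
./artisan                            # start an interactive Artisan CLI session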

command
EC2 instance from Artisan's AMI
aws ec2 run-instances \
  --image-id ami-0b0fb4e17bc1eee94 \
  --count 1 \
  --instance-type t2.medium \
  --key-name [your key pair name here] \
  --security-group-ids [your desired security group ID here]
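
If you prefer the CLI for the security group step as well, a rule like the one below opens port 8443. This is a sketch with placeholder values; in practice, restrict the source CIDR to your own address range.

command
Inbound rule for port 8443 on your security group
aws ec2 authorize-security-group-ingress \
  --group-id [your security group ID here] \
  --protocol tcp \
  --port 8443 \
  --cidr [your source CIDR here]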

Preparing Splunk SOAR

First, make sure that you've successfully started Splunk SOAR as described in the steps above. You can now access its web interface by visiting https://YOUR_SOAR_IP:8443. Your browser will warn you that the connection is not secure; this is expected because Splunk SOAR is meant to run on-prem and, by default, serves its web interface with a self-signed certificate that the browser cannot verify. It's safe to proceed to the login page. The username is soar_local_admin and the password is soar.

For Artisan to authenticate its operations with your Splunk SOAR instance's REST API, you need to supply the authentication token from your Splunk SOAR web interface. You can find it by clicking Home -> Administration -> User Management -> Users and selecting the user automation.

Copy the value of the ph-auth-token key and paste it into the PH_AUTH_TOKEN field in your .env file. Make sure the SOAR_IP field matches the server value shown in your Splunk SOAR REST API authentication details. It will look something like the example displayed here.

Authentication for REST API

{ "ph-auth-token": "*********", 
"server": "https://YOUR_SOAR_IP:8443" }

Reports and results

Artisan saves results for the Security Automation Integrity Check in integrity_reports.json, located in the same directory as ./artisan.

Artisan saves results for the Security Automation & Response Posture (SARP) Assessment in sarp_reports.json, located in the same directory as integrity_reports.json.

You can now ingest these JSON files into Splunk ES and customize your dashboard and key-value pairs.
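
One common path for that is Splunk's HTTP Event Collector (HEC). The sketch below is only an illustration: it assumes HEC is enabled on your Splunk instance, the host, HEC token, index, and sourcetype are hypothetical placeholders, and jq is used to wrap the report in a HEC event envelope before posting it.

command
Ingest a report via Splunk HTTP Event Collector
# Placeholder host, token, index, and sourcetype; adjust to your environment.
# Assumes the report is a single JSON document; adapt the jq filter if it is a list of results.
jq -c '{event: ., sourcetype: "artisan:sarp", index: "security"}' sarp_reports.json | \
  curl -k https://YOUR_SPLUNK_HOST:8088/services/collector/event \
    -H "Authorization: Splunk YOUR_HEC_TOKEN" \
    -d @-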

Congratulations! You now have a complete and in-depth view of your security automation & response posture and its effectiveness.

.env

The .env file is a configuration file meant to give you flexibility in which resources you wish Artisan to consume. Above all, choose an LLM provider that you trust and, ideally, one that is under your complete control.

There are no network requests that facilitate data egress; no data ever leaves your environment. Other than SOAR-related REST calls, Artisan makes only two kinds of network calls:

1) A GET call that fetches tuning data for Artisan's ML model
2) The LLM calls whose configuration you define in the .env file.

.env Attributes
  • LLM_PROVIDER string required

    Here you can define your preferred LLM provider.

  • LLM_API_KEY string

    This field is necessary if your LLM provider requires a key for authentication.

  • LLM_MODEL string

    If your LLM provider supports it, you can define a specific model to be used during Artisan's LLM inferences.

  • LLM_EXTRA_HEADERS_JSON string

    Define extra data to be carried in the header of the network request made to your LLM provider.

  • LLM_CONNECT_TIMEOUT int

    Determines a period of time, in seconds, to wait for a response to the LLM inference request; if none arrives within this window, Artisan safely continues to the next operation.

  • SOAR_IP string required

    This field hosts the IP address of your Splunk SOAR instance.

  • PH_AUTH_TOKEN string required

    This field hosts the authentication token that authenticates Artisan's operations with Splunk SOAR REST API.

  • PROMPT_URL string required

    This field hosts the URL of Artisan's REST API prompt endpoint. It returns a specialized prompt unique to the particular operation being carried out at that moment.

  • PROMPT_API_KEY string required

    This is how we validate you as a user. You need this API key to pull prompt data from Artisan.

  • PROMPT_TIMEOUT int

    Determines a period of time, in seconds, to wait for a response from Artisan's prompt endpoint; if none arrives within this window, Artisan safely stops execution.

.env
~/artisan/.env
############################################
# Artisan CLI Configuration (one file)
# Pick ONE provider below by setting LLM_PROVIDER.
# Optionally set LLM_MODEL to override the built-in default for that provider.
############################################

# --- Provider selection (choose ONE) ---
# Valid values: openrouter | openai | grok | gemini | custom
LLM_PROVIDER=YOUR_LLM_PROVIDER

# Your API key for the chosen provider
# - openrouter: key from openrouter.ai
# - openai:     key from platform.openai.com
# - grok:       key from x.ai
# - gemini:     API key from ai.google.dev
# - custom:     leave blank if your gateway uses no auth
LLM_API_KEY=YOUR_LLM_API_KEY

# Optional: override the model that Artisan will use for this provider.
# If blank, Artisan uses a sensible default baked into code.
# Examples:
#   openrouter → e.g. anthropic/claude-3.5-sonnet  OR qwen/qwen3-coder
#   openai     → e.g. gpt-4o-mini
#   grok       → e.g. grok-2
#   gemini     → e.g. gemini-1.5-pro
#   custom     → e.g. llama-3-70b-instruct
LLM_MODEL=gpt-5

# Optional: extra HTTP headers as JSON (useful for OpenRouter app attribution, proxies, etc.)
# Example: {"HTTP-Referer":"https://artisan.local","X-Title":"artisan"}
LLM_EXTRA_HEADERS_JSON=

# Optional: networking timeouts (seconds)
LLM_CONNECT_TIMEOUT=8

SOAR_IP=YOUR_SOAR_IP
PH_AUTH_TOKEN=YOUR_AUTH_TOKEN
PROMPT_URL=https://infra.allguard.ai/api/v1/prompt
PROMPT_API_KEY=YOUR_PROMPT_API_KEY
PROMPT_TIMEOUT=8