Console
Total API Calls
0
+12% from last week
Active API Keys
0
Local Storage
0 KB
Quick Actions
System Status
API Key Management
Manage and generate API keys to access the Genesis AI models.
Generate New Key
Create a new key. You will only see the full key once.
Your new API Key (Copy Now!)
Active Keys
The keys below are currently active and ready for use. **The full key value is never stored after generation.**
No active keys found. Generate one to get started!
Playground
Type a message below to start testing the model.
Enter sequences that stop generation.
Prompt Library
Saved prompts and configurations.
No saved prompts yet.
Available Models
Explore the different AI models available on the Genesis platform (model files are served from the modals/ directory).
Genesis SPT-4.6
Latest
The newest and most capable JSON model. Optimized for complex reasoning and broader knowledge coverage. Human-readable and easy to edit.
Format: Standard JSON
Parameters: 5079
URL: modals/Genesis-SPT-4.6.json
Genesis SPT-4.5
Binary (.bin)
A high-performance binary model. Optimized for speed and general knowledge. Uses a custom binary format for efficient client-side decoding.
Format: Custom Binary
Parameters: 1446
URL: modals/Genesis-SPT-4.5-240126P1105M.bin
Genesis SPT-1.0
Legacy JSON
The original JSON-based model. Human-readable and easy to edit. Best for understanding the underlying structure of Genesis AI responses.
Format: Standard JSON
Size: ~450 KB
URL: modals/Genesis-SPT-1.0.json
Tuned Models
Fine-tune Genesis models with your own datasets.
No tuned models found.
Batch Testing
Run prompts against multiple inputs to validate consistency.
| Input Variable | Model Output | Status |
|---|---|---|
Settings
Appearance
Data Management
Clear all locally stored data including API keys, saved prompts, and tuned model metadata.
API Key Usage Guide
Your secret API key (gs_sk_...) is used to authenticate your requests to the Genesis AI platform. Follow these best practices when integrating the key into your applications.
Authentication Method (Bearer Token)
All requests to the Genesis AI API must include your secret key in the Authorization HTTP header, formatted as a Bearer Token.
JavaScript Fetch Example:
```javascript
async function generateContent(apiKey, prompt) {
  // Validate the API key structure before sending any request
  const keyRegex = /^gs_sk_[0-9a-f]{8}_[0-9a-f]{8}_[0-9a-f]{8}TO(\d+|UNL1M)$/i;
  const match = apiKey.match(keyRegex);
  if (!match) {
    throw new Error("Invalid API Key: Must match format gs_sk_<hex>_<hex>_<hex>TO<limit>");
  }
  // The regex is case-insensitive, so normalize before comparing
  const tokenLimit = match[1].toUpperCase() === 'UNL1M' ? Infinity : parseInt(match[1], 10);

  // Call this endpoint for full AI generation
  const response = await fetch('https://xpdevs.github.io/Genesis-AI/developers/api/generate', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': 'Bearer ' + apiKey
    },
    body: JSON.stringify({
      model: "genesis-spt-1.0",
      prompt: prompt,
      max_tokens: 128
    })
  });

  if (!response.ok) {
    throw new Error(`API request failed with status: ${response.status}`);
  }
  const data = await response.json();

  // Enforce the token limit (word count)
  if (tokenLimit !== Infinity && data.text) {
    const words = data.text.split(/\s+/);
    if (words.length > tokenLimit) data.text = words.slice(0, tokenLimit).join(" ");
  }
  return data;
}

// NOTE: In a production environment, 'apiKey' should be loaded
// from a secure server-side environment variable on a backend server.
const mySecretKey = "[YOUR_FULL_API_KEY]";
generateContent(mySecretKey, "Write a headline for a new productivity app.")
  .then(result => console.log(result.text))
  .catch(error => console.error(error));
```
Security Best Practices
- Never Hardcode: Do not embed your secret key directly in front-end code (HTML, client-side JavaScript).
- Use Environment Variables: Store the key in environment variables on your server or CI/CD system.
- Do Not Share: Treat your key like a password. If compromised, revoke it immediately on the "API Keys" page.
Client-Side Binary Model Integration
Binary Genesis AI models (such as SPT-4.5) are distributed as high-performance .bin files. To use them directly in your application, you must decode them using the following protocol.
1. The Decoder Function
```javascript
function decodeBinary(buffer) {
  const bytes = new Uint8Array(buffer);
  const view = new DataView(buffer);
  const XOR_KEY = 0xAA;
  const decoder = new TextDecoder('utf-8');
  let jsonString = "";
  let i = 0;

  // Signature check: skip the 4-byte "GNIS" magic number if present
  try {
    if (view.getUint32(0, true) === 0x53494E47) i = 4;
  } catch (e) { i = 0; }

  while (i < bytes.length) {
    const b = bytes[i];
    switch (b) {
      case 0x01: jsonString += "{"; break;
      case 0x02: jsonString += "}"; break;
      case 0x03: jsonString += ":"; break;
      case 0x04:
        // Suppress commas that would directly precede a closing brace/bracket
        if (i + 1 < bytes.length && bytes[i + 1] !== 0x02 && bytes[i + 1] !== 0x06) jsonString += ",";
        break;
      case 0x05: jsonString += "["; break;
      case 0x06: jsonString += "]"; break;
      case 0x07: {
        // String literal: XOR-obfuscated bytes, null-terminated
        i++;
        const start = i;
        while (i < bytes.length && bytes[i] !== 0x00) i++;
        const chunk = bytes.slice(start, i);
        const decrypted = new Uint8Array(chunk.length);
        for (let j = 0; j < chunk.length; j++) decrypted[j] = chunk[j] ^ XOR_KEY;
        jsonString += '"' + decoder.decode(decrypted) + '"';
        break;
      }
    }
    i++;
  }
  return JSON.parse(jsonString.trim());
}
```
2. Fetching and Using the Model
```javascript
async function loadGenesisModel() {
  const response = await fetch('https://xpdevs.github.io/Genesis-AI/modals/Genesis-SPT-4.5-240126P1105M.bin');
  const buffer = await response.arrayBuffer();
  const knowledgeBase = decodeBinary(buffer);
  console.log("Model Loaded. Keys:", Object.keys(knowledgeBase).length);

  // Example query
  const query = "hello";
  if (knowledgeBase[query]) {
    console.log("Response:", knowledgeBase[query]);
  }
}

loadGenesisModel();
```
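For local testing of the byte layout above, a tiny encoder can produce conforming buffers. This helper is an illustrative sketch only, not part of the platform; it mirrors decodeBinary's protocol (control bytes for JSON punctuation, 0x07-prefixed, null-terminated, XOR-obfuscated strings):

```javascript
// Hypothetical encoder for round-trip testing of the decodeBinary format
const XOR_KEY = 0xAA;

function encodeString(str) {
  const out = [0x07]; // string marker
  for (const b of new TextEncoder().encode(str)) out.push(b ^ XOR_KEY);
  out.push(0x00); // null terminator
  return out;
}

function encodeObject(obj) {
  const out = [0x01]; // '{'
  const keys = Object.keys(obj);
  keys.forEach((key, idx) => {
    out.push(...encodeString(key));
    out.push(0x03); // ':'
    out.push(...encodeString(String(obj[key])));
    if (idx < keys.length - 1) out.push(0x04); // ','
  });
  out.push(0x02); // '}'
  return new Uint8Array(out);
}
```

Feeding `encodeObject({ hello: "hi" }).buffer` to `decodeBinary` should round-trip to the original object; the GNIS signature is optional in the decoder, so it is omitted here.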
Selective Logic Loading
Important: Do Not Link Directly
The main.js file contains automatic initialization logic (redirects and UI rendering) that will conflict with your application.
Do not use `<script src="...">`. Instead, manually extract the functions you need.
Recommended Extraction Pattern
To use specific features like the Safety System or Response Engine, copy these functions directly:
1. Safety System (Banned Words)
```javascript
let bannedWords = [];

async function loadBannedWords() {
  try {
    // Cache-bust with a timestamp so updated word lists take effect immediately
    const res = await fetch("https://xpdevs.github.io/Genesis-AI/js/banned/words.json?v=" + Date.now());
    if (res.ok) bannedWords = await res.json();
  } catch (err) { console.error("Error loading banned words:", err); }
}
loadBannedWords();

function violatesRules(text) {
  if (!bannedWords.length) return false;
  const lowerText = text.toLowerCase();
  return bannedWords.some(word => new RegExp(`\\b${word}\\b`, 'i').test(lowerText));
}
```
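Because the regex uses `\b` word boundaries, banned words only match as whole words, not as substrings. A test-friendly variant that takes the word list as a parameter (a sketch, not part of main.js) makes this behavior easy to verify without the fetch step:

```javascript
// Same matching logic as violatesRules, but with the word list passed in
// explicitly so it can be unit-tested in isolation.
function violatesRulesWith(text, words) {
  if (!words.length) return false;
  // 'i' flag makes the match case-insensitive; \b anchors whole words
  return words.some(word => new RegExp(`\\b${word}\\b`, 'i').test(text));
}
```

For example, with the list `["spoiler"]`, the input "spoiler alert" matches, while "spoilers ahead" does not, since "spoilers" is a different whole word.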
2. Response Engine (Fuzzy Matcher)
```javascript
function findResponses(input) {
  const lowerInput = input.toLowerCase();
  const foundMatches = [];

  // 'responses' must already be loaded from the model file (see decodeBinary).
  // Match longer keys first so they are not shadowed by shorter substrings.
  const sortedKeys = Object.keys(responses).sort((a, b) => b.length - a.length);
  let tempInput = lowerInput;

  sortedKeys.forEach(key => {
    const lowerKey = key.toLowerCase();
    let index = tempInput.indexOf(lowerKey);
    while (index !== -1) {
      foundMatches.push({ text: responses[key], index: index });
      // Blank out the matched span so it cannot match a second time
      tempInput = tempInput.substring(0, index) + ' '.repeat(lowerKey.length) + tempInput.substring(index + lowerKey.length);
      index = tempInput.indexOf(lowerKey);
    }
  });

  if (foundMatches.length === 0) return { role: "ai", text: "I’m not quite sure I follow." };

  // Stitch the matched responses back together in input order
  const orderedMessages = foundMatches.sort((a, b) => a.index - b.index).map(m => m.text);
  if (orderedMessages.length === 1) return { role: "ai", text: orderedMessages[0] };
  const last = orderedMessages.pop();
  return { role: "ai", text: orderedMessages.join(", ") + " and " + last };
}
```