Using HTTP_POST_VERBOSE on IBM i for REST API Calls

IBM i provides powerful SQL functions for making HTTP requests, allowing seamless integration with REST APIs. One of the most useful functions is QSYS2.HTTP_POST_VERBOSE, which sends a POST request and returns detailed response data. This capability is perfect for connecting IBM i applications to modern web services, whether you’re sending data to an external API or caching results with Redis, as I’ve done in my redis400 project.

In this article, we’ll cover:

  • How to construct HTTP headers (including authentication and SSL settings)
  • Making POST requests with HTTP_POST_VERBOSE
  • Processing the response (status code, headers, and body)

Let’s dive in!

1. Building the HTTP Request Headers

The HTTP_POST_VERBOSE function requires HTTP headers to be passed in JSON format. Depending on your security requirements, you may need to include headers like Content-Type, Content-Length, Authorization, and SSL settings for secure connections.

Key Headers to Include

  • Content-Type: Typically application/json for REST APIs.
  • Content-Length: The length of the input data (calculated dynamically).
  • Authorization: A Bearer token for authenticated APIs.
  • SSL Settings: Options like sslCertificateStoreFile for custom certificates or sslTolerate to handle certificate issues.
Example: Building the Headers Conditionally

-- If HTTPS is enabled
If IN_USE_HTTPS = 1 Then
    -- Use a custom certificate store (if not default)
    If IN_USE_DEFAULT_CERT_STORE = 0 Then
        Set REQ_HEADER = Json_Object(
            'header' Value 'content-type, application/json',
            'header' Value 'content-length, ' Concat Length(RTrim(IN_DATA)),
            'header' Value 'Authorization, Bearer ' Concat Trim(TOKEN),
            'sslCertificateStoreFile' Value Trim(APP_CERT_STORE)
        );
    Else
        -- Use default SSL trust (ignore some cert issues)
        Set REQ_HEADER = Json_Object(
            'header' Value 'content-type, application/json',
            'header' Value 'content-length, ' Concat Length(RTrim(IN_DATA)),
            'header' Value 'Authorization, Bearer ' Concat Trim(TOKEN),
            'sslTolerate' Value 'true'
        );
    End If;
Else
    -- Plain HTTP (no SSL)
    Set REQ_HEADER = Json_Object(
        'header' Value 'content-type, application/json',
        'header' Value 'content-length, ' Concat Length(RTrim(IN_DATA)),
        'header' Value 'Authorization, Bearer ' Concat Trim(TOKEN)
    );
End If;

This code checks if HTTPS is enabled (IN_USE_HTTPS = 1) and whether to use a custom certificate store (IN_USE_DEFAULT_CERT_STORE = 0). It then builds the REQ_HEADER JSON object with the necessary headers, including Content-Type, Content-Length, and Authorization. For HTTPS, it either specifies a custom certificate store (sslCertificateStoreFile) or tolerates minor certificate issues (sslTolerate).

2. Making the POST Request with HTTP_POST_VERBOSE

The QSYS2.HTTP_POST_VERBOSE function is designed to send HTTP POST requests and return detailed response data. It takes three parameters:

  • URL: The API endpoint (e.g., https://api.example.com/data).
  • IN_DATA: The JSON payload (as a CLOB).
  • REQ_HEADER: The headers in JSON format (constructed in the previous step).

Example: Executing the POST Request

Here’s how to execute the POST request and extract the HTTP status code from the response:

SELECT 
    IFNULL(R.HTTP_CODE, '5xx') 
INTO 
    RESP_CODE  
FROM 
    TABLE(QSYS2.HTTP_POST_VERBOSE(RTRIM(URL), RTRIM(IN_DATA), RTRIM(REQ_HEADER))) A  
CROSS JOIN 
    JSON_TABLE(
        REPLACE(A.RESPONSE_HTTP_HEADER, '=', ':'),  
        '$' COLUMNS(
            HTTP_CODE CHAR(3) PATH '$.HTTP_STATUS_CODE'  
        )
    ) R;

Key Points

  • HTTP_POST_VERBOSE returns a table with columns like RESPONSE_HTTP_HEADER (headers as a key=value string) and RESPONSE_BODY (the response data).
  • The RESPONSE_HTTP_HEADER is in a Key=Value format, so we use Replace(A.RESPONSE_HTTP_HEADER, '=', ':') to convert it to JSON (Key:Value) for parsing with JSON_TABLE.
  • We extract the HTTP_STATUS_CODE (e.g., 200, 401, 500) using JSON_TABLE and store it in RESP_CODE.
  • If the request fails and no status code is returned, we default to '5xx' (indicating a server error).

This approach is useful for scenarios like sending data to an external API and confirming the request succeeded before proceeding with further processing.
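As an illustration of that gating step, the status code captured above can guard the follow-up work. This is a sketch only: the sync_queue table, its columns, and the SQLSTATE value are invented for the example, not part of the original code.

```sql
-- Continue only when the API confirmed success (any 2xx code);
-- sync_queue and its columns are illustrative, not from the article
IF RESP_CODE LIKE '2%' THEN
  UPDATE sync_queue
     SET status = 'SENT'
   WHERE id = current_id;
ELSE
  -- Surface the failure to the caller
  SIGNAL SQLSTATE '75002'
     SET MESSAGE_TEXT = 'POST failed, check RESP_CODE';
END IF;
```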

3. Handling the Response

The HTTP_POST_VERBOSE function provides detailed response data, including:

  • HTTP Status Code: E.g., 200 (success), 401 (unauthorized), 500 (server error).
  • Response Headers: Useful for pagination, rate limits, or debugging.
  • Response Body: The actual data returned by the API (e.g., JSON, XML).

Example: Extracting Response Data

To retrieve both the status code and the response body, you can modify the query like this:

SELECT 
    R.HTTP_CODE,
    A.RESPONSE_BODY
FROM 
    TABLE(QSYS2.HTTP_POST_VERBOSE(
        RTRIM(URL), 
        RTRIM(IN_DATA), 
        RTRIM(REQ_HEADER)
    )) A
CROSS JOIN 
    JSON_TABLE(
        REPLACE(A.RESPONSE_HTTP_HEADER, '=', ':'),
        '$' COLUMNS(
            HTTP_CODE CHAR(3) PATH '$.HTTP_STATUS_CODE'
        )
    ) R;

This query returns the HTTP status code (HTTP_CODE) and the response body (RESPONSE_BODY), which you can then parse further (e.g., using JSON_TABLE if the body is JSON). For example, in my redis400 project, I use this to retrieve data from a Redis server via a REST API, ensuring I can cache the response efficiently.
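To take the parsing one step further, the body itself can be shredded with JSON_TABLE. A minimal sketch, assuming the API happens to return a JSON body shaped like {"items":[{"id":1,"name":"Widget"}]}; the payload shape and the output column names are invented for illustration:

```sql
-- Sketch: POST, then shred a hypothetical JSON body of the form
-- {"items":[{"id":1,"name":"Widget"}]}
SELECT J.ITEM_ID, J.ITEM_NAME
FROM TABLE(QSYS2.HTTP_POST_VERBOSE(
         RTRIM(URL), RTRIM(IN_DATA), RTRIM(REQ_HEADER)
     )) A
CROSS JOIN JSON_TABLE(
         A.RESPONSE_BODY,
         '$.items[*]' COLUMNS(
             ITEM_ID   INTEGER      PATH '$.id',
             ITEM_NAME VARCHAR(100) PATH '$.name'
         )
     ) J;
```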

Best Practices

To make the most of HTTP_POST_VERBOSE, follow these best practices:

  • Always Check the Response Code: Handle 4xx (client errors) and 5xx (server errors) appropriately. For example, a 401 might indicate an expired token, while a 429 suggests rate limiting.
  • Use HTTPS in Production: Avoid plain HTTP to ensure data security, especially when sending sensitive data.
  • Log Errors and Performance: Track API failures and response times to identify issues early. You can store logs in a database table or a user space on IBM i.
  • Retry on Transient Failures: For errors like 429 Too Many Requests, implement a retry mechanism with exponential backoff to handle transient issues gracefully.
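The retry idea above can be sketched in SQL PL, using DLYJOB to pause between attempts. Everything here is illustrative: CALL_API is a hypothetical procedure standing in for whatever wrapper runs the HTTP_POST_VERBOSE query shown earlier and returns the status code.

```sql
-- Illustrative retry loop with exponential backoff (1s, 2s, 4s).
-- CALL_API is a hypothetical procedure that performs the POST and
-- returns the HTTP status code in RESP_CODE.
CREATE OR REPLACE PROCEDURE POST_WITH_RETRY(IN URL VARCHAR(256), IN IN_DATA CLOB)
BEGIN
  DECLARE ATTEMPT   INT DEFAULT 0;
  DECLARE WAIT_SECS INT DEFAULT 1;
  DECLARE RESP_CODE CHAR(3) DEFAULT '5xx';

  retry_loop: WHILE ATTEMPT < 3 DO
    CALL CALL_API(URL, IN_DATA, RESP_CODE);

    -- Only retry transient failures: 429 or any 5xx
    IF RESP_CODE <> '429' AND RESP_CODE NOT LIKE '5%' THEN
      LEAVE retry_loop;
    END IF;

    -- Delay the job before the next attempt, doubling the wait each time
    CALL QSYS2.QCMDEXC('DLYJOB DLY(' CONCAT TRIM(CHAR(WAIT_SECS)) CONCAT ')');
    SET WAIT_SECS = WAIT_SECS * 2;
    SET ATTEMPT  = ATTEMPT + 1;
  END WHILE;
END;
```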

Conclusion

IBM i’s QSYS2.HTTP_POST_VERBOSE function is a powerful tool for integrating with REST APIs directly from SQL, enabling seamless communication with modern web services. By properly constructing headers, making POST requests, and handling responses, you can connect IBM i to a wide range of systems—whether it’s caching data with Redis in my redis400 project or integrating with AI APIs for advanced analytics. This approach simplifies development and ensures your IBM i applications can interact with the modern web efficiently.


REDIS400: Redis SQL Functions for IBM i: A Comprehensive Tutorial

Introduction

Welcome to this comprehensive tutorial on using Redis SQL Functions for IBM i. This powerful integration allows IBM i developers to leverage Redis’s high-performance caching capabilities directly from SQL queries. Whether you’re looking to implement session storage, application caching, or real-time data processing, these functions provide a seamless bridge between your IBM i database and Redis.

Prerequisites

Before we begin, ensure you have:

  1. Access to an IBM i system with development tools (ILE C compiler, SQL)
  2. A running Redis server accessible from your IBM i (default: 127.0.0.1:6379)
  3. Basic familiarity with IBM i development and Redis concepts

Installation and Setup

Step 1: Clone the Repository

git clone https://github.com/krakadin/redis400.git
cd redis400

Step 2: Build the Project

gmake

This will:

  1. Create the REDIS400 library
  2. Compile all C modules
  3. Create the service program
  4. Register all SQL functions

Step 3: Verify Installation

Check that all objects were created successfully:

SELECT * FROM QSYS2.SYSFUNCS 
WHERE SPECIFIC_NAME LIKE 'REDIS_%';

1. REDIS_SET – Storing Data

Syntax:

VALUES REDIS_SET(key, value);

Example:

-- Store a simple key-value pair
VALUES REDIS_SET('app:config:timeout', '30');

-- Store JSON data
VALUES REDIS_SET('user:1001', '{"name":"John","email":"john@example.com"}');

Best Practices:

  • Use descriptive key names with a consistent naming convention
  • Consider serializing complex data to JSON before storage
  • For temporary data, combine with REDIS_EXPIRE (covered later)

2. REDIS_GET – Retrieving Data

Syntax:

SELECT REDIS_GET(key) FROM SYSIBM.SYSDUMMY1;

Example:

-- Retrieve a simple value
SELECT REDIS_GET('app:config:timeout') AS timeout FROM SYSIBM.SYSDUMMY1;

-- Use in a more complex query
SELECT u.user_id, u.user_name, REDIS_GET('user_pref:' || u.user_id) AS preferences
FROM users u
WHERE u.department = 'SALES';

-- From JSON data
SELECT x.*
    FROM JSON_TABLE(
            REDIS_GET('user:1001'),
            '$'
            EMPTY ON ERROR COLUMNS(
                name VARCHAR(256) PATH '$.name', 
                email VARCHAR(256) PATH '$.email', 
                age INTEGER PATH '$.age' DEFAULT 27 ON EMPTY
            )
        ) AS x;

Handling Results:

  • Returns NULL if the key doesn’t exist
  • For numeric values, you may need to CAST the result: CAST(REDIS_GET('count') AS INTEGER)
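Since a missing key comes back as NULL, COALESCE is a convenient way to supply a fallback; a small sketch combining both points above:

```sql
-- Fall back to a default of '60' when the key is absent,
-- and cast the stored string to an integer for arithmetic
SELECT CAST(COALESCE(REDIS_GET('app:config:timeout'), '60') AS INTEGER) AS timeout
FROM SYSIBM.SYSDUMMY1;
```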

3. REDIS_INCR – Atomic Counter

Syntax:

VALUES REDIS_INCR(key);

Example:

-- Create or increment an order counter
VALUES REDIS_INCR('orders:current_id');

-- Use in an application
SET order_id = REDIS_INCR('orders:current_id');
INSERT INTO orders VALUES(order_id, CURRENT_TIMESTAMP, ...);

Key Features:

  • Atomic operation – safe for concurrent access
  • If key doesn’t exist, initializes to 0 then increments to 1
  • Returns the new value after increment

4. REDIS_DEL – Removing Data

Syntax:

VALUES REDIS_DEL(key);

Example:

-- Remove a single key
VALUES REDIS_DEL('temp:session:abc123');

-- Clean up an old key (in a stored procedure)
-- Note: Redis DEL takes exact key names; it does not expand
-- wildcard patterns like 'old:data:*'
BEGIN
  DECLARE keys_deleted INT;
  SET keys_deleted = REDIS_DEL('old:data:2024');
  -- keys_deleted is 1 if the key existed, 0 otherwise
END

Return Value:

  • 1 if the key was deleted
  • 0 if the key didn’t exist

5. REDIS_EXPIRE – Setting Expiration

Syntax:

SELECT REDIS_EXPIRE(key, seconds) FROM SYSIBM.SYSDUMMY1;

Example:

-- Set a 30-minute expiration
SELECT REDIS_EXPIRE('user:session:abc123', 1800) FROM SYSIBM.SYSDUMMY1;

-- Combine with SET in a transaction
BEGIN
  DECLARE success INT;
  VALUES REDIS_SET('temp:report:status', 'processing');
  SET success = REDIS_EXPIRE('temp:report:status', 3600); -- 1 hour
END

Return Value:

  • 1 if timeout was set
  • 0 if key doesn’t exist

6. REDIS_TTL – Checking Expiration

Syntax:

SELECT REDIS_TTL(key) FROM SYSIBM.SYSDUMMY1;

Example:

-- Check how long until a session expires
SELECT REDIS_TTL('user:session:abc123') AS seconds_remaining 
FROM SYSIBM.SYSDUMMY1;

-- Monitor cache expiration
SELECT cache_key, REDIS_TTL(cache_key) AS ttl
FROM cached_items
WHERE REDIS_TTL(cache_key) BETWEEN 1 AND 300; -- Expiring in 1-5 minutes

Return Values:

  • Positive integer: Remaining time in seconds
  • -1: Key exists but has no associated expiration
  • -2: Key does not exist
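The three cases can be folded into one readable status column; a quick sketch:

```sql
-- Translate REDIS_TTL return values into a human-readable status
SELECT CASE
         WHEN ttl = -2 THEN 'key does not exist'
         WHEN ttl = -1 THEN 'no expiration set'
         ELSE 'expires in ' CONCAT TRIM(CHAR(ttl)) CONCAT ' seconds'
       END AS status
FROM (
  SELECT REDIS_TTL('user:session:abc123') AS ttl
  FROM SYSIBM.SYSDUMMY1
) t;
```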

7. REDIS_PING – Testing Connectivity

Syntax:

VALUES(REDIS_PING());

Example:

-- Simple connection test
VALUES(REDIS_PING());

-- Use in a health check procedure
BEGIN
  DECLARE status VARCHAR(10);
  SET status = REDIS_PING();
  IF status = 'PONG' THEN
    -- Redis is responsive
  ELSE
    -- Handle connection issue
  END IF;
END

Practical Use Cases

1. Session Management

-- Store session data with expiration
VALUES REDIS_SET('session:' || session_id, session_data);
VALUES REDIS_EXPIRE('session:' || session_id, 1800); -- 30 minutes

-- Check session validity
SELECT 
  REDIS_GET('session:' || session_id) AS session_data,
  REDIS_TTL('session:' || session_id) AS ttl
FROM SYSIBM.SYSDUMMY1;

2. Application Caching

-- Cache expensive query results
-- (an SQL procedure returns data through OUT parameters,
-- so the cached JSON is passed back in product_json)
CREATE PROCEDURE get_product_details(
  IN product_id INT,
  OUT product_json CLOB
)
BEGIN
  DECLARE cache_key VARCHAR(128);
  DECLARE cached_result CLOB;
  
  SET cache_key = 'product:' || product_id;
  SET cached_result = REDIS_GET(cache_key);
  
  IF cached_result IS NULL THEN
    -- Not in cache, run the query
    SELECT JSON_OBJECT(
      'id' VALUE p.id,
      'name' VALUE p.name,
      'price' VALUE p.price,
      'stock' VALUE p.stock
    ) INTO cached_result
    FROM products p
    WHERE p.id = product_id;
    
    -- Store in cache with a 1-hour expiration
    VALUES REDIS_SET(cache_key, cached_result);
    VALUES REDIS_EXPIRE(cache_key, 3600);
  END IF;
  
  -- Return the result through the OUT parameter
  SET product_json = cached_result;
END

3. Rate Limiting

-- Track API calls per user
CREATE PROCEDURE check_rate_limit(IN user_id INT)
BEGIN
  DECLARE key VARCHAR(128);
  DECLARE calls INT;
  
  SET key = 'api_limit:' || user_id;
  SET calls = REDIS_INCR(key);
  
  -- If this was the first call, set expiration
  IF calls = 1 THEN
    VALUES REDIS_EXPIRE(key, 3600); -- 1-hour window
  END IF;
  
  -- Check if over limit (100 calls/hour)
  IF calls > 100 THEN
    SIGNAL SQLSTATE '75001' SET MESSAGE_TEXT = 'Rate limit exceeded';
  END IF;
END

Performance Considerations

  1. Network Latency: Each Redis operation involves a network call. Minimize round trips by:
    • Using Redis effectively for caching rather than primary storage
    • Batching operations where possible
  2. Connection Management: The UDFs manage connections efficiently, but:
    • Avoid extremely frequent calls in tight loops
    • Consider connection pooling for high-volume applications
  3. Data Size: Redis performs best with smaller values:
    • Keep individual values under 100KB when possible
    • For larger data, consider splitting or using traditional DB storage

Troubleshooting

Common Issues and Solutions:

  1. Connection Errors:
    • Verify Redis server is running: VALUES(REDIS_PING())
    • Check network connectivity between IBM i and Redis server
    • Confirm correct host/port in configuration
  2. Timeout Errors:
    • Increase timeout in Redis configuration if needed
    • Check for network latency issues
  3. Function Not Found:
    • Verify the functions were created successfully
    • Check your library list includes REDIS400

Monitoring Usage

Track Redis function usage:

SELECT 
  function_name, 
  COUNT(*) as call_count,
  AVG(elapsed_time) as avg_time
FROM QSYS2.FUNCTION_STATISTICS
WHERE function_schema = 'REDIS400'
GROUP BY function_name;

Conclusion

The Redis SQL Functions for IBM i provide a powerful way to integrate Redis’s high-performance caching with your IBM i applications. By mastering these functions, you can:

  1. Implement efficient caching layers to reduce database load
  2. Build robust session management systems
  3. Create atomic counters for ID generation or rate limiting
  4. Add expiration features to temporary data
  5. Enhance application performance with Redis’s in-memory speed

For more advanced usage, consider exploring:

  • Redis data structures (hashes, lists, sets)
  • Pub/sub functionality for event-driven architectures
  • Lua scripting for complex atomic operations

Remember to monitor your Redis usage and adjust configurations as your application scales. Happy coding with Redis on IBM i!

Bringing Redis to IBM i with REDIS400

If you’re running IBM i, you’re sitting on a powerhouse for business workloads, trusted by industries like finance, retail, and manufacturing for its rock-solid reliability and integrated Db2 for i database. But in an era of real-time applications—think e-commerce, IoT, or live analytics—speed is everything. What if you could supercharge IBM i with Redis, the world’s fastest in-memory data store? That’s where Redis400 comes in: my custom-built integration that brings Redis’s lightning-fast capabilities to IBM i, tailored for flexibility, performance, and native compatibility. Whether you’re caching API responses, managing web sessions, or streaming live data, Redis400 makes your IBM i applications soar while staying true to the platform’s legacy roots.

As I prepare er400.io for its 2025 launch—a hub for IBM i modernization tools and tutorials—Redis400 is a flagship project showcasing my knack for system-level solutions. It’s open-source, available at https://github.com/krakadin/redis400, and built to empower developers to make IBM i systems talk smarter. Let’s dive into what Redis400 is, how it works, and why it’s a game-changer for IBM i.

What is Redis400?

Redis400 is a set of SQL User-Defined Functions (UDFs) written in ILE C, designed to integrate Redis with IBM i’s Db2 for i. These functions let you interact with a Redis server directly from SQL queries, RPG, COBOL, or any ILE language, enabling high-performance data operations without leaving the IBM i ecosystem. The core functions include:

  • REDIS_SET: Stores a key-value pair in Redis (e.g., cache a customer record).
  • REDIS_GET: Retrieves a value by key, with sub-millisecond latency.
  • REDIS_INCR: Increments an integer counter (e.g., for order numbers or analytics).
  • REDIS_DEL: Deletes a key, freeing memory.
  • REDIS_EXPIRE (added March 12, 2025): Sets a time-to-live (TTL) for a key, ideal for temporary data like sessions.
  • REDIS_TTL (added March 12, 2025): Checks a key’s remaining TTL, useful for managing expirations.
  • REDIS_PING (added March 30, 2025): Tests Redis server connectivity, returning “PONG” if successful.

Each function is compiled into a standalone ILE module, which you can link individually into your ILE programs (RPG, COBOL) for lightweight integration. Alternatively, they’re bundled into a service program (redisile.srvpgm), exposing UDFs for SQL access, so you can call Redis operations like SELECT REDIS_GET('key') from Db2. This dual approach—ILE modules for programmatic control and SQL UDFs for database-driven apps—gives you unmatched flexibility to modernize IBM i applications.

Redis400 is IBM i-native, built around socket programming to communicate with Redis over TCP/IP. It handles EBCDIC-to-ASCII conversions transparently, ensuring compatibility with IBM i’s unique encoding while leveraging Redis’s ASCII-based protocol. The result? You can automate tasks like setting session data from a CL script, fetch cached reports from an RPG app without Db2 queries, or stream real-time events from a PHP web app—all with Redis’s blazing speed.

Why Redis400 for IBM i?

With over 100,000 IBM i installations worldwide, the platform is a cornerstone for enterprises, offering unmatched stability and a tightly integrated stack (Db2, RPG, CL). But its strength—optimized for transactional workloads—can be a bottleneck for real-time demands, where frequent Db2 queries or complex joins slow things down. Redis, a NoSQL in-memory database, excels at low-latency tasks like caching, session management, and event streaming, complementing Db2’s relational power. Redis400 bridges these worlds, letting you:

  • Cache Data: Store hot Db2 records (e.g., product catalogs) in Redis, cutting query times from milliseconds to microseconds.
  • Manage Sessions: Handle web app sessions with REDIS_EXPIRE, perfect for PHP or Node.js frontends on IBM i.
  • Track Analytics: Use REDIS_INCR for real-time counters, like page views or order numbers.
  • Process Events: Leverage Redis streams (via future UDFs) for logging user actions or IoT data.
  • Rate Limit APIs: Implement counters to secure web services, enhancing IBM i’s HTTP server capabilities.

Unlike generic Redis clients (e.g., ioredis for Node.js or redis-py for Python), Redis400 is tailored for IBM i’s PASE (Portable Application Solutions Environment) and ILE (Integrated Language Environment). It eliminates middleware complexity, making Redis accessible to traditional IBM i developers (RPG, COBOL) and modern ones (PHP, SQL) alike.

How It Works: Technical Deep Dive

Redis400’s architecture is a masterclass in IBM i engineering, blending legacy compatibility with modern performance:

  • ILE C Functions: Each UDF (e.g., REDIS_SET, REDIS_GET) is written in ILE C, compiled into modules (REDISGET, REDISSET, etc.) using IBM i’s CRTCMOD. These modules are bound into a service program (redisile.srvpgm) for SQL access or linked directly into ILE programs.
  • Socket Programming: Redis400 uses TCP/IP sockets to connect to a Redis server (local via PASE or remote). It sends Redis commands in RESP (REdis Serialization Protocol) format, ensuring compatibility with any standard Redis server.
  • EBCDIC-ASCII Conversion: IBM i uses EBCDIC encoding, while Redis expects ASCII. Redis400 handles conversions natively, so your RPG or SQL code doesn’t need to manage encoding.
  • SQL UDFs: Registered in Db2, UDFs like REDIS_GET or REDIS_EXPIRE are callable from SQL queries, CL scripts, or embedded in RPG/COBOL via SQL, making integration seamless.
  • Makefile Automation: A Makefile streamlines compilation, binding, and deployment, creating the REDIS400 library, modules, service program, and UDFs with a single gmake command.

Getting Started with Redis400

Ready to unleash Redis400? Here’s how to set it up:

  1. Prerequisites:
    • IBM i Access: An IBM i system (V7.3+) with the ILE C compiler (5770-WDS option 51) and SQL.
    • Redis Server: Running locally via PASE (yum install redis) or remotely (e.g., Redis Cloud). Default: 127.0.0.1:6379.
    • Makefile Support: Install make via yum install make in PASE.
    • Git: For cloning the repo (git clone https://github.com/krakadin/redis400).
  2. Install Redis400:
    • Clone the repo and navigate to the directory: cd redis400.
    • Fix Git settings if needed: git config --global core.autocrlf input.
    • Run gmake to:
      • Create the REDIS400 library.
      • Compile C modules (redisget.c, redisset.c, etc.) into ILE modules.
      • Bind modules into redisile.srvpgm.
      • Register UDFs (REDIS_GET, REDIS_SET, etc.) in Db2.
  3. Verify Installation:
    • Check modules: DSPOBJD OBJ(REDIS400/REDISGET) OBJTYPE(*MODULE).
    • Check service program: DSPOBJD OBJ(REDIS400/REDISILE) OBJTYPE(*SRVPGM).
    • Check UDFs: SELECT * FROM QSYS2.SYSFUNCS WHERE SPECIFIC_NAME = 'REDIS_GET'.
  4. Configure Redis:
    • Edit /QOpenSys/etc/redis.conf (PASE) to bind to 127.0.0.1:6379.
    • Start Redis: redis-server /QOpenSys/etc/redis.conf --daemonize yes.
    • Test connectivity: redis-cli -h 127.0.0.1 -p 6379 PING (expect “PONG”).
  5. Use Redis400:
    • Run SQL queries: SELECT REDIS_GET('key') FROM SYSIBM.SYSDUMMY1.
    • Embed in RPG: Call REDIS_SET to cache data.
    • Script in CL: Automate tasks like session cleanup.

The GitHub repo (https://github.com/krakadin/redis400) includes source code, a Makefile, and examples, tested on IBM i V7.3 and V7.4.

Expanded Use Cases

Redis400 unlocks a range of applications for IBM i, blending legacy and modern needs:

  • E-Commerce: Cache product details to speed up web storefronts, reducing Db2 load. Example: VALUES REDIS_SET('product:123', 'Laptop, $999').
  • Session Management: Store web app sessions with REDIS_EXPIRE for PHP apps on IBM i’s HTTP server. Example: SELECT REDIS_EXPIRE('session:abc', 3600).
  • Real-Time Analytics: Track page views or logins with REDIS_INCR. Example: SET views = REDIS_INCR('page:home').
  • Job Queues: Use Redis lists (via future UDFs) for lightweight batch jobs. Example: REDIS_LIST('order_queue', 'RPUSH', 'order:456').
  • API Rate Limiting: Limit API calls with REDIS_INCR and REDIS_TTL, enhancing security. Example: SELECT REDIS_INCR('api:user123').
  • Health Checks: Use REDIS_PING to monitor Redis server uptime in production. Example: VALUES REDIS_PING().

These use cases make IBM i a contender in modern app development, from web services to IoT.

Why Open Source?

Redis400 is open-source under the MIT License, reflecting my belief in collaborative innovation. The GitHub repo (https://github.com/krakadin/redis400) is a hub for developers to:

  • Fork and Customize: Adapt Redis400 for your IBM i setup.
  • Contribute: Add UDFs (e.g., HSET, SCAN) or optimize socket performance.
  • Share Ideas: Propose integrations with PHP frameworks or Python, aligning with er400.io’s vision.

The repo includes C source files, a Makefile, and detailed docs, making it accessible to IBM i developers and newcomers. As er400.io gears up for its 2025 launch, Redis400 is a cornerstone, showcasing how open-source tools can modernize legacy systems while fostering community.

Challenges and Future Roadmap

Redis400 is powerful but faces hurdles:

  • Niche Audience: IBM i’s declining developer base (many nearing retirement) limits adoption.
  • PASE Setup: Running Redis locally requires PASE expertise, which can be daunting on older systems.
  • Authentication: Redis400 doesn’t yet support Redis’s AUTH command, requiring unauthenticated servers.
  • Scalability: High-traffic apps may need Redis clusters, demanding network optimization.

Future enhancements include:

  • Support for more Redis commands (APPEND, AUTH, SCAN).
  • A GUI (e.g., like Redis Insight) for managing Redis data on IBM i.

Join the Revolution

Redis400 isn’t just a tool—it’s a movement to keep IBM i thriving in the age of real-time apps. Whether you’re an RPG guru, a PHP developer, or a system admin, Redis400 lets you tap Redis’s speed without leaving IBM i’s ecosystem. Grab it at https://github.com/krakadin/redis400, run the examples, and share your feedback. Got ideas for new UDFs, PHP integrations, or er400.io’s 2025 vision? Hit me up via GitHub or er400.io’s contact form. Let’s build the future of IBM i, one Redis command at a time!

Author: Ernest Rozloznik
Acknowledgments: Thanks to the IBM i and Redis communities for their tools and inspiration.


Create a Self-Signed Certificate with OpenSSL on IBM i

How to Install an SSL Certificate on IBM i

Installing SSL certificates on IBM i can be done effectively using OpenSSL. In this guide, we will walk through generating a private key, creating a certificate signing request (CSR), generating a self-signed certificate, and converting it to PKCS12 format if needed.

Prerequisites:

  • OpenSSL installed on your IBM i system.
  • Access to QP2TERM (PASE for i).

Step 1: Access IBM i PASE (QP2TERM)

To begin, you need to access the QP2TERM terminal, which allows you to execute OpenSSL commands on IBM i. Use the following command to launch the terminal:

CALL QP2TERM

Step 2: Generate a Private Key and CSR (Certificate Signing Request)

To create a new private key and a CSR, you can use either of the following approaches:

Option 1: Generating a Key and Self-Signed Certificate in One Step

Use this OpenSSL command to generate a private key and a self-signed certificate in one step:

openssl req -newkey rsa:2048 -nodes -keyout key.pem -x509 -days 365 -out certificate.pem

Option 2: Separate Key and CSR Generation

Alternatively, you can generate the private key and CSR in separate steps:

  • Generate a private key (password protected; 2048 bits is the recommended minimum for RSA):
openssl genrsa -des3 -out server.key 2048
  • Generate a CSR:
openssl req -new -key server.key -out server.csr

Optional: Remove the password from the private key:

  • Make a backup of the original private key:
cp server.key server.key.org
  • Remove the password:
openssl rsa -in server.key.org -out server.key

Step 3: Generate a Self-Signed Certificate

Once the private key and CSR are generated, use them to create a self-signed SSL certificate:

openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt

Step 4: Convert to PKCS12 Format (Optional)

If needed, you can convert the generated certificate and key into a PKCS12 format (.pfx):

openssl pkcs12 -export -out exported.pfx -inkey server.key -in server.crt

Step 5: Use the SSL Certificate on IBM i

After generating the certificate, import it into your IBM i server’s digital certificate manager (DCM) or use it in your web server or other applications.


Reference

For additional details, visit IBM’s support page on creating self-signed certificates using OpenSSL.



Importing/Indexing Database (MySQL or SQL Server) in Solr Using Data Import Handler


Step 1: Install Solr

Download and install Solr from the Apache Solr website. Access the Solr admin console at http://localhost:8983/solr/.


Step 2: Set Up Connectors

  • MySQL: Download JDBC driver and place mysql-connector-java-*.jar in contrib/dataimporthandler/lib.
  • SQL Server: Download Microsoft JDBC driver and copy sqljdbc4.jar to the same directory.

Step 3: Create New Collection

Create a new collection folder like /solr/myproducts/conf. Copy solrconfig.xml and modify to include:

<lib dir="../../contrib/dataimporthandler/lib" regex=".*\.jar" />
<requestHandler name="/dataimport" class="org.apache.solr.handler.dataimport.DataImportHandler">
    <lst name="defaults">
        <str name="config">data-config.xml</str>
    </lst>
</requestHandler>

Step 4: Create data-config.xml

  • For MySQL:
<dataSource type="JdbcDataSource" driver="com.mysql.jdbc.Driver" url="jdbc:mysql://localhost:3306/mydb1" user="root" password=""/>
<entity name="product" query="select id, name from products" deltaImportQuery="..." deltaQuery="...">

For SQL Server:

<dataSource type="JdbcDataSource" driver="com.microsoft.sqlserver.jdbc.SQLServerDriver" url="jdbc:sqlserver://servername..." user="sa" password="mypass"/>

Step 5: Edit schema.xml

Add field mappings based on your DB structure:

<field name="id" type="string" indexed="true" stored="true" required="true"/>
<field name="name" type="text_general" indexed="true" stored="true"/>

Step 6: Perform Data Import

  • Full import: http://localhost:8983/solr/myproducts/dataimport?command=full-import
  • Delta import: http://localhost:8983/solr/myproducts/dataimport?command=delta-import


NGINX on IBM i

To start default configuration:

===> /QOpenSys/pkgs/bin/nginx -c /QOpenSys/etc/nginx/nginx.conf

To stop:

===> /QOpenSys/pkgs/bin/nginx -c /QOpenSys/etc/nginx/nginx.conf -s stop

To list processes:

===> ps aux | grep nginx

or

===> ps ax | grep nginx
===> ps -f -u ernest

Sample nginx configuration file:

worker_processes  3;

events {
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    sendfile        on;

    keepalive_timeout  65;

    server {
        listen       9010;
        server_name  localhost;

        location / { try_files $uri @er; }
        location @er {
            include fastcgi_params;
            fastcgi_param PATH_INFO $fastcgi_script_name;
            fastcgi_param SCRIPT_NAME "";
            fastcgi_pass unix:/tmp/er9010f-fcgi.sock;
        }

        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
        location ~ /\.ht {
            deny  all;
        }
    }
}

More on ps command: https://www.binarytides.com/linux-ps-command/


Open Source on IBM i

===> /QOpenSys/pkgs/bin/yum install <package>

Add yum (and other packages) to your path if you want to call them directly

===> PATH=/QOpenSys/pkgs/bin:$PATH
===> export PATH

or if you want to store it permanently in your profile

===> echo 'PATH=/QOpenSys/pkgs/bin:$PATH' >> $HOME/.profile
===> echo 'export PATH' >> $HOME/.profile

Some useful yum commands

===> yum list available
===> yum list installed
===> yum list all
===> yum search <package>
===> yum remove <package>

Use rpm for more detailed info about packages. The following will show when each individual package was installed or updated.

===> rpm -qa --last
===> rpm -q <package> --last

This command will list all installed files for a package

===> rpm -ql <package>

References:

http://www-01.ibm.com/support/docview.wss?uid=nas8N1022619

https://bitbucket.org/ibmi/opensource/src/master/docs/yum/


Install Zend Expressive on IBM i

$ composer create-project zendframework/zend-expressive-skeleton expressive
$ composer require zendframework/zend-db
$ composer require --dev "zendframework/zend-expressive-tooling:^0.4.1"
$ ./vendor/bin/expressive module:create MyModule
$ ./vendor/bin/expressive middleware:create "MyModule\Action\ListModuleAction"
$ composer require tuupola/cors-middleware
$ composer require zendframework/zend-stratigility