# Read Operations

## Keys

Every record is identified by a key tuple: `(namespace, set, primary_key)`.

```python
key = ("test", "demo", "user1")      # string PK
key = ("test", "demo", 12345)        # integer PK
key = ("test", "demo", b"\x01\x02")  # bytes PK
```
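Because all three key forms are plain tuples, shape mistakes surface only at call time. A small illustrative helper (not part of `aerospike_py`; the name `validate_key` is an assumption for this sketch) that checks a key tuple before use:

```python
# Illustrative helper, not a library API: check that a key is a
# well-formed (namespace, set, primary_key) tuple.
def validate_key(key: tuple) -> tuple:
    if len(key) != 3:
        raise ValueError("key must be (namespace, set, primary_key)")
    namespace, set_name, pk = key
    if not isinstance(namespace, str) or not isinstance(set_name, str):
        raise TypeError("namespace and set must be strings")
    if not isinstance(pk, (str, int, bytes)):
        raise TypeError("primary key must be str, int, or bytes")
    return key

validate_key(("test", "demo", "user1"))  # string PK, passes
validate_key(("test", "demo", 12345))    # integer PK, passes
validate_key(("test", "demo", b"\x01"))  # bytes PK, passes
```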
## Read

- Sync

```python
from aerospike_py import Record

record: Record = client.get(key)
print(record.bins)      # {"name": "Alice", "age": 30}
print(record.meta.gen)  # 1
print(record.meta.ttl)  # 2591998

# Tuple unpacking (backward compat)
_, meta, bins = client.get(key)

# Read specific bins
record = client.select(key, ["name"])
# record.bins = {"name": "Alice"}
```

- Async

```python
record: Record = await client.get(key)
_, meta, bins = await client.get(key)
record = await client.select(key, ["name"])
```
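The same `Record` works with both attribute access and classic `(key, meta, bins)` tuple unpacking. A minimal sketch of how such a dual interface can be built (this is an assumption for illustration, not the library's actual implementation):

```python
from dataclasses import dataclass
from typing import Any

# Sketch only: a Record that supports attribute access AND 3-tuple unpacking.
@dataclass
class Meta:
    gen: int  # record generation (write counter)
    ttl: int  # seconds until expiration

@dataclass
class Record:
    key: tuple
    meta: Meta
    bins: dict

    def __iter__(self):
        # Unpacking order matches the classic (key, meta, bins) tuple API.
        yield self.key
        yield self.meta
        yield self.bins

record = Record(("test", "demo", "user1"),
                Meta(gen=1, ttl=2591998),
                {"name": "Alice", "age": 30})
_, meta, bins = record               # tuple-style
assert bins is record.bins           # attribute-style sees the same dict
```

Making the record iterable keeps old tuple-unpacking call sites working while new code uses the named attributes.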
## Exists

```python
from aerospike_py import ExistsResult

result: ExistsResult = client.exists(key)  # or: await client.exists(key)
if result.meta is not None:
    print(f"gen={result.meta.gen}")
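The `meta is not None` check is the whole protocol: an absent record is signaled by `meta` being `None` rather than by an exception. A sketch of that assumed shape (field names beyond `meta` are illustrative, not confirmed library API):

```python
from dataclasses import dataclass
from typing import Optional

# Assumed shape for illustration: exists() results carry meta=None
# when the record is absent.
@dataclass
class Meta:
    gen: int
    ttl: int

@dataclass
class ExistsResult:
    key: tuple
    meta: Optional[Meta]

def describe(result: ExistsResult) -> str:
    # Mirrors the doc's pattern: test meta against None before using it.
    return f"gen={result.meta.gen}" if result.meta is not None else "not found"

found = ExistsResult(("test", "demo", "user1"), Meta(gen=1, ttl=2591998))
missing = ExistsResult(("test", "demo", "nope"), None)
```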
## Batch Read

Read multiple records in a single network call.

- Sync

```python
keys: list[tuple] = [("test", "demo", f"user_{i}") for i in range(10)]

# All bins
batch = client.batch_read(keys)
for br in batch.batch_records:
    if br.record:
        print(br.record.bins)

# Specific bins
batch = client.batch_read(keys, bins=["name", "age"])

# Existence check only
batch = client.batch_read(keys, bins=[])
```

- Async

```python
batch = await client.batch_read(keys, bins=["name", "age"])
for br in batch.batch_records:
    if br.record:
        print(br.record.bins)
```
## Tips

- Batch size: 100-5,000 keys per batch is optimal. Very large batches may time out.
- Timeouts: Increase `total_timeout` for large batch operations.
- Error handling: Individual batch records can fail independently. Always check `br.record` for `None`.
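To stay inside the suggested 100-5,000 key range, a large key list can be split into fixed-size chunks before each `batch_read` call. A small illustrative helper (`chunked` is not a library function, just a sketch):

```python
from typing import Iterator

# Illustrative helper: split a large key list into batches of a bounded
# size so each batch_read call stays within the recommended range.
def chunked(keys: list, size: int = 1000) -> Iterator[list]:
    for start in range(0, len(keys), size):
        yield keys[start:start + size]

keys = [("test", "demo", f"user_{i}") for i in range(2500)]
batches = list(chunked(keys, size=1000))
# 3 batches of sizes 1000, 1000, 500
```

Each chunk would then be passed to `client.batch_read(chunk)` in turn, with per-chunk error handling as described above.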