In the first part, I did an exploratory data analysis of the gamma spectroscopy data. We were able to see that, using a modern scintillation detector, we can not only see that an object is radioactive; with a gamma spectrum, we are also able to tell why it is radioactive and what kind of isotopes the object contains.
In this part, we will go further, and I will show how to make and train a machine learning model for detecting radioactive elements.
Before we begin, an important warning. All data files collected for this article are available on Kaggle, and readers can train and test their ML models without having real hardware. If you want to test real objects, do it at your own risk. I did my tests with sources that can be legally found and bought, like vintage uranium glass or old watches with radium dial paint. Please check your local laws and read the safety guidelines about handling radioactive materials. The sources used in this test are not particularly dangerous, but they still must be handled with care!
Now, let's get started! I will show how to collect the data, train the model, and run it using a Radiacode scintillation detector. For those readers who do not have Radiacode hardware, a link to the data source is added at the end of the article.
Methodology
This article will contain several parts:
- I will briefly explain what a gamma spectrum is and how we can use it.
- We will collect the data for our ML model. I will show the code for collecting the spectra using the Radiacode device.
- We will train the model and check its accuracy.
- Finally, I will make an HTMX-based web frontend for the model, and we will see the results in real time.
Let’s get into it!
1. Gamma Spectrum
This is a short recap of the first part; for more details, I highly recommend reading it first.
Why is the gamma spectrum so interesting? Some objects around us can be slightly radioactive. The sources vary from the naturally occurring radiation of granite in buildings to the radium in some vintage watches or the thorium in modern thoriated tungsten rods. A Geiger counter only shows us the number of radioactive particles that were detected. A scintillation detector shows us not only the number of particles but also their energies. This is a crucial difference: it turned out that different radioactive materials emit gamma rays with different energies, and each material has its own "footprint."
As a first example, I bought this pendant in a Chinese store:
It was marketed as "ion-generating," so I already suspected that the pendant could be slightly radioactive (ionizing radiation, as its name suggests, can produce ions). Indeed, as we can see on the meter screen, its radioactivity level is about 1.20 µSv/h, which is 12 times higher than the background (0.1 µSv/h). It is not crazy high and is comparable to the level on an airplane during a flight, but it is still statistically significant 😉
However, by only observing this value, we cannot tell why the object is radioactive. A gamma spectrum will show us what isotopes are inside the object:

In this example, the pendant contains thorium-232, and the thorium decay chain produces radium and actinium. As we can see on the graph, the actinium-228 peak is clearly visible in the spectrum.
As a second example, let's say we have found this piece of rock:

This is uraninite, a mineral that contains a lot of uranium dioxide. Such specimens can be found in some areas of Germany, the Czech Republic, or the US. If we get it in a mineral shop, it probably has a label on it. But in the field, that is usually not the case 😉 With a gamma spectrum, we can see an image like this:

By comparing the peaks with known isotopes, we can tell that the rock contains uranium but, for example, not thorium.
A physical explanation of the gamma spectrum is also fascinating. As we can see on the graph below, gamma rays are actually photons and belong to the same spectrum as visible light:

When some people think that radioactive objects glow in the dark, it is actually true! Every radioactive material is indeed glowing with its own unique "color," but in a part of the spectrum that is very far away and not visible to the human eye.
A second fascinating thing is that only 10-20 years ago, gamma spectroscopy was available only to institutions and big labs (in the best case, some used crystals of unknown quality could be found on eBay). Nowadays, thanks to advances in electronics, a scintillation detector can be bought for the price of a mid-range smartphone.
Now, let's return to our project. As we can see from the two examples above, the spectra of different objects are different. Let's create a machine learning model that can automatically detect various elements.
2. Collecting the Data
As readers can guess, our first challenge is collecting the samples. I am not a nuclear institution, and I do not have access to calibrated test sources like cesium or strontium. However, for our task, this is not required, and some materials can be legally found and bought. For example, americium is still used in smoke detectors; radium was used for painting watch dials before the 1960s; uranium was widely used in glass manufacturing before the 1950s; and thoriated tungsten rods are still produced today and can be bought from Amazon. Even natural uranium ore can be bought in mineral shops; however, it requires a bit more safety precautions. A benefit of gamma spectroscopy is that we do not need to disassemble or break the objects, and the process is generally safe.
The second challenge is collecting the data. If you work in e-commerce, this is usually not a problem, and every SQL request returns millions of records. Alas, in the "real world," it can be much more challenging, especially if you want to make a database of radioactive materials. In our case, collecting each spectrum takes 10-20 minutes. For every test object, it would be good to have at least 10 records. As we can see, the process can take hours, and having millions of records is not a realistic option.
To get the spectrum data, I will be using a Radiacode 103G scintillation detector and the open-source radiacode library.

A gamma spectrum can be exported in XML format using the official Radiacode Android app, but the manual process is too slow and tedious. Instead, I created a Python script that collects the spectra using random time intervals:
import datetime
import json
import logging
import random
import time

from radiacode import RadiaCode, RawData, Spectrum


def read_forever(rc: RadiaCode):
    """ Read data from the device """
    while True:
        interval_sec = random.randint(10*60, 30*60)
        read_spectrum(rc, interval_sec)

def read_spectrum(rc: RadiaCode, interval: int):
    """ Read and save a spectrum """
    rc.spectrum_reset()
    # Read
    dt = datetime.datetime.now()
    filename = dt.strftime("spectrum-%Y%m%d%H%M%S.json")
    logging.debug(f"Making spectrum for {interval // 60} min")
    # Wait
    t_start = time.monotonic()
    while time.monotonic() - t_start < interval:
        show_device_data(rc)
        time.sleep(0.4)
    # Save
    spectrum: Spectrum = rc.spectrum()
    spectrum_save(spectrum, filename)

def show_device_data(rc: RadiaCode):
    """ Get CPS (counts per second) values """
    data = rc.data_buf()
    for record in data:
        if isinstance(record, RawData):
            log_str = f"CPS: {int(record.count_rate)}"
            logging.debug(log_str)

def spectrum_save(spectrum: Spectrum, filename: str):
    """ Save spectrum data to a JSON file """
    duration_sec = spectrum.duration.total_seconds()
    data = {
        "a0": spectrum.a0,
        "a1": spectrum.a1,
        "a2": spectrum.a2,
        "counts": spectrum.counts,
        "duration": duration_sec,
    }
    with open(filename, "w") as f_out:
        json.dump(data, f_out, indent=4)
    logging.debug(f"File '{filename}' saved")


rc = RadiaCode()
read_forever(rc)
Some error handling is omitted here for clarity. A link to the full source code can be found at the end of the article.
As we can see, I randomly select a time interval between 10 and 30 minutes, collect the gamma spectrum data, and save it to a JSON file. Now, I only need to place the Radiacode detector near the object and leave the script running for several hours. As a result, 10-20 JSON files will be saved. I also need to repeat the process for every sample I have. As a final output, 100-200 files can be collected. It is still not millions, but as we will see, it is enough for our task.
3. Training the Model
When the data from the previous step is ready, we can start training the model. As a reminder, all files are available on Kaggle, and readers are welcome to make their own models as well.
First, let's preprocess the data and extract the features we want to use.
3.1 Data Load
When the data is collected, we should have a number of spectrum files saved in JSON format. An individual file looks like this:
{
    "a0": 24.524023056030273,
    "a1": 2.2699732780456543,
    "a2": 0.0004327862989157,
    "counts": [ 48, 52, ..., 0, 35 ],
    "duration": 1364.0
}
Here, the "counts" array is the actual spectrum data. Different detectors may have different formats; a Radiacode returns the data in the form of a 1024-channel array. The calibration constants [a0, a1, a2] allow us to convert the channel number into the energy in keV (kiloelectronvolts).
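As a quick sanity check, we can plug the calibration constants from the example file above into E = a0 + a1*C + a2*C^2 for the middle channel:
# Calibration constants from the JSON example above
a0, a1, a2 = 24.524023056030273, 2.2699732780456543, 0.0004327862989157

ch = 511  # middle of the 1024-channel range
energy_kev = a0 + a1 * ch + a2 * ch**2
print(f"Channel {ch} -> {energy_kev:.1f} keV")
#> Channel 511 -> 1297.5 keV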
First, let's make a method to load the spectrum from a file:
from dataclasses import dataclass

import json
import numpy as np


@dataclass
class Spectrum:
    """ Radiation spectrum measurement data """
    duration: int
    a0: float
    a1: float
    a2: float
    counts: list[int]

    def channel_to_energy(self, ch: int) -> float:
        """ Convert a channel number to the energy level """
        return self.a0 + self.a1 * ch + self.a2 * ch**2

    def energy_to_channel(self, e: float):
        """ Convert energy to the channel number (inverse of E = a0 + a1*C + a2*C^2) """
        c = self.a0 - e
        return int(
            (np.sqrt(self.a1**2 - 4 * self.a2 * c) - self.a1) / (2 * self.a2)
        )


def load_spectrum_json(filename: str) -> Spectrum:
    """ Load a spectrum from a JSON file """
    with open(filename) as f_in:
        data = json.load(f_in)
    return Spectrum(
        a0=data["a0"], a1=data["a1"], a2=data["a2"],
        counts=data["counts"],
        duration=int(data["duration"]),
    )
Now, we can draw it with Matplotlib:
from typing import Optional

import matplotlib.pyplot as plt


def draw_simple_spectrum(spectrum: Spectrum, title: Optional[str] = None):
    """ Draw a spectrum obtained from the Radiacode """
    fig, ax = plt.subplots(figsize=(12, 3))
    ax.spines["top"].set_color("lightgray")
    ax.spines["right"].set_color("lightgray")
    counts = spectrum.counts
    energy = [spectrum.channel_to_energy(x) for x in range(len(counts))]
    # Bars
    ax.bar(energy, counts, width=3.0, label="Counts")
    # X values
    ticks_x = [
        spectrum.channel_to_energy(ch) for ch in range(0, len(counts), len(counts) // 20)
    ]
    labels_x = [f"{ch:.1f}" for ch in ticks_x]
    ax.set_xticks(ticks_x, labels=labels_x)
    ax.set_xlim(energy[0], energy[-1])
    plt.ylim(0, None)
    title_str = "Gamma-spectrum" if title is None else title
    ax.set_title(title_str)
    ax.set_xlabel("Energy, keV")
    plt.legend()
    fig.tight_layout()


sp = load_spectrum_json("thorium-20250617012217.json")
draw_simple_spectrum(sp)
The output looks like this:

What can we see here?
As was mentioned before, a standard Geiger counter gives us only the number of detected particles. It tells us whether the object is radioactive or not, but not much more. A scintillation detector gives us the number of particles grouped by their energies, which is practically a ready-to-use histogram! Radioactive decay itself is random, so the longer the collection time, the "smoother" the graph.
3.2 Data Transform
3.2.1 Normalization
Let's look at the spectrum again:

Here, the data was collected for about 10 minutes, and the vertical axis contains the number of detected particles. This approach has a simple drawback: the number of particles is not a constant. It depends on both the collection time and the "strength" of the source. It means that we may get not 600 particles like on this graph, but 60 or 6,000. We can also see that the data is a bit noisy. This is especially visible with a "weak" source and a short collection time.
To eliminate these issues, I decided to use a two-step pipeline. First, I applied a Savitzky-Golay filter to reduce the noise:
from scipy.signal import savgol_filter


def smooth_data(data: np.array) -> np.array:
    """ Apply a 1D smoothing filter to the data array """
    window_size = 10
    data_out = savgol_filter(
        data,
        window_length=window_size,
        polyorder=2,
    )
    return np.clip(data_out, a_min=0, a_max=None)
It is especially useful for spectra with short collection times, where the peaks are not so clearly visible.
Second, I normalized the NumPy array to 0..1 by simply dividing its values by the maximum.
The final "normalize" method looks like this:
def normalize(spectrum: Spectrum) -> Spectrum:
    """ Normalize data to the vertical range of 0..1 """
    # Smooth the data
    counts = np.array(spectrum.counts).astype(np.float64)
    counts = smooth_data(counts)
    # Normalize
    val_norm = counts.max()
    return Spectrum(
        duration=spectrum.duration,
        a0=spectrum.a0,
        a1=spectrum.a1,
        a2=spectrum.a2,
        counts=counts/val_norm
    )
As a result, spectra from different sources now have a similar scale:

As we can also see, the difference between the two samples is clearly visible.
3.2.2 Data Augmentation
Technically, we are ready to train the model. However, as we saw in the "Collecting the Data" part, the dataset is pretty small – I have only 100-200 files in total. The solution is to augment the data by adding more synthetic samples.
As a simple approach, I decided to add some noise to the original spectra. But how much noise should we add? I selected the 680 keV channel as a reference value because this part of the spectrum has no interesting isotopes. Then I added noise with 50% of the amplitude of that channel. An np.clip call ensures that the data values are not negative (a negative number of detected particles does not make physical sense).
def add_noise(spectrum: Spectrum) -> Spectrum:
    """ Add random noise to the spectrum """
    counts = np.array(spectrum.counts)
    ch_empty = spectrum.energy_to_channel(680.0)
    val_norm = counts[ch_empty]
    ampl = val_norm / 2
    noise = np.random.normal(0, ampl, counts.shape)
    data_out = np.clip(counts + noise, a_min=0, a_max=None)
    return Spectrum(
        duration=spectrum.duration,
        a0=spectrum.a0,
        a1=spectrum.a1,
        a2=spectrum.a2,
        counts=data_out
    )


sp = load_spectrum_json("thorium-20250617012217.json")
sp = add_noise(normalize(sp))
draw_simple_spectrum(sp, "Thorium (normalized, with noise)")
The output looks like this:

As we can see, the noise level is not that big, so it does not distort the peaks. At the same time, it adds some diversity to the data.
A more sophisticated approach can also be used. For example, some radioactive minerals contain thorium, uranium, or potassium in different proportions. It would be possible to combine the spectra of existing samples to get some "new" ones.
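As a rough illustration of this idea (it is not used for the model in this article), two normalized spectra from the same detector could be blended with a chosen weight:
def mix_spectra(sp_a: Spectrum, sp_b: Spectrum, weight: float) -> Spectrum:
    """ Make a synthetic sample as a weighted sum of two normalized spectra """
    counts_a = np.array(sp_a.counts)
    counts_b = np.array(sp_b.counts)
    # Assumption: both spectra were recorded with the same detector,
    # so they share the same calibration constants
    return Spectrum(
        duration=sp_a.duration,
        a0=sp_a.a0, a1=sp_a.a1, a2=sp_a.a2,
        counts=weight * counts_a + (1.0 - weight) * counts_b,
    )

# Example: a 70/30 "thorium + uranium" blend (sp_thorium and sp_uranium are
# hypothetical spectra loaded and normalized beforehand)
# sp_mix = mix_spectra(sp_thorium, sp_uranium, weight=0.7)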
3.2.3 Feature Extraction
Technically, we could use all 1024 values "as is" as an input for our ML model. However, this approach has two problems:
- First, it is redundant – we are mostly interested only in specific isotopes. For example, on the last graph, there is a clearly visible peak at 238 keV, which belongs to Lead-212, and a less visible peak at 338 keV, which belongs to Actinium-228.
- Second, it is device-specific. I want the model to be universal. Using only the energies of the selected isotopes as input allows us to use any gamma spectrometer model.
Finally, I created this list of isotopes:
isotopes = [
    # Americium
    ("Am-241", 59.5),
    # Potassium
    ("K-40", 1460.0),
    # Radium
    ("Ra-226", 186.2),
    ("Pb-214", 242.0),
    ("Pb-214", 295.2),
    ("Pb-214", 351.9),
    ("Bi-214", 609.3),
    ("Bi-214", 1120.3),
    ("Bi-214", 1764.5),
    # Thorium
    ("Pb-212", 238.6),
    ("Ac-228", 338.2),
    ("Tl-208", 583.2),
    ("Ac-228", 911.2),
    ("Ac-228", 969.0),
    # Uranium
    ("Th-234", 63.3),
    ("Th-231", 84.2),
    ("Th-234", 92.4),
    ("Th-234", 92.8),
    ("U-235", 143.8),
    ("U-235", 185.7),
    ("U-235", 205.3),
    ("Pa-234m", 766.4),
    ("Pa-234m", 1000.9),
]

def isotopes_save(filename: str):
    """ Save the isotopes list to a file """
    with open(filename, "w") as f_out:
        json.dump(isotopes, f_out)
Only the spectrum values for these isotopes will be used as input for the model. I also created a method to save the list into a JSON file – it will be needed when the model is loaded later. Some isotopes, like Uranium-235, may be present in minuscule amounts and not be practically detectable. Readers are welcome to improve the list on their own.
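The matching isotopes_load helper is used later when the model is loaded; a minimal version (assuming the same JSON format that isotopes_save writes) can look like this:
def isotopes_load(filename: str) -> list:
    """ Load the isotopes list from a JSON file """
    with open(filename) as f_in:
        return json.load(f_in)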
Now, let's create a method that converts a Radiacode spectrum into a list of features:
from typing import List


def get_features(spectrum: Spectrum, isotopes: List) -> np.array:
    """ Extract features from the spectrum """
    energies = [energy for _, energy in isotopes]
    data = [spectrum.counts[spectrum.energy_to_channel(energy)] for energy in energies]
    return np.array(data)
Practically, we converted a list of 1024 values into a NumPy array with only 23 elements, which is a nice dimensionality reduction!
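For example, applying it to the thorium spectrum that was loaded earlier:
sp = normalize(load_spectrum_json("thorium-20250617012217.json"))
features = get_features(sp, isotopes)
print(features.shape)
#> (23,)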
3.3 Training
Finally, we are ready to train the ML model.
First, let's combine all files into one dataset. Practically, it depends on the samples you have and may look like this:
import glob
from typing import Tuple


all_files = [
    ("Americium", glob.glob("../data/train/americium*.json")),
    ("Radium", glob.glob("../data/train/radium*.json")),
    ("Thorium", glob.glob("../data/train/thorium*.json")),
    ("Uranium Glass", glob.glob("../data/train/uraniumGlass*.json")),
    ("Uranium Glaze", glob.glob("../data/train/uraniumGlaze*.json")),
    ("Uraninite", glob.glob("../data/train/uraninite*.json")),
    ("Background", glob.glob("../data/train/background*.json")),
]

def prepare_data(augmentation: int) -> Tuple[np.array, np.array]:
    """ Prepare data for training """
    x, y = [], []
    for name, files in all_files:
        for filename in files:
            print(f"Processing {filename}...")
            sp = normalize(load_spectrum_json(filename))
            for _ in range(augmentation):
                sp_out = add_noise(sp)
                x.append(get_features(sp_out, isotopes))
                y.append(name)
    return np.array(x), np.array(y)


X_train, y_train = prepare_data(augmentation=10)
As we can see, our y-values contain names like "Americium." I will use a LabelEncoder to convert them into numeric values:
from sklearn.preprocessing import LabelEncoder

le = LabelEncoder()
le.fit(y_train)
y_train = le.transform(y_train)

print("X_train:", X_train.shape)
#> (1900, 23)
print("y_train:", y_train.shape)
#> (1900,)
I decided to use the open-source XGBoost model, which is based on gradient tree boosting (original paper link). I will also use GridSearchCV to find the optimal parameters:
from xgboost import XGBClassifier
from sklearn.model_selection import GridSearchCV

bst = XGBClassifier(n_estimators=10, max_depth=2, learning_rate=1)
clf = GridSearchCV(
    bst,
    {
        "max_depth": [1, 2, 3, 4],
        "n_estimators": range(2, 20),
        "learning_rate": [0.001, 0.01, 0.1, 1.0, 10.0]
    },
    verbose=1,
    n_jobs=1,
    cv=3,
)
clf.fit(X_train, y_train)

print("best_score:", clf.best_score_)
#> best_score: 0.99474
print("best_params:", clf.best_params_)
#> best_params: {'learning_rate': 1.0, 'max_depth': 1, 'n_estimators': 9}
Last but not least, I need to save the trained model:
isotopes_save("../models/V1/isotopes.json")
clf.best_estimator_.save_model("../models/V1/XGBClassifier.json")
np.save("../models/V1/LabelEncoder.npy", le.classes_)
Obviously, we need not only the model itself but also the list of isotopes and the labels. If we change something, the data will not match anymore, and the model will produce garbage, so model versioning is our friend!
To verify the results, I need data that the model did not "see" before. I had already collected several XML files using the Radiacode Android app, and just for fun, I decided to use them for testing.
First, I created a method to load the files:
import xmltodict


def load_spectrum_xml(file_path: str) -> Spectrum:
    """ Load the spectrum from a Radiacode Android app file """
    with open(file_path) as f_in:
        doc = xmltodict.parse(f_in.read())
    result = doc["ResultDataFile"]["ResultDataList"]["ResultData"]
    spectrum = result["EnergySpectrum"]
    cal = spectrum["EnergyCalibration"]["Coefficients"]["Coefficient"]
    a0, a1, a2 = float(cal[0]), float(cal[1]), float(cal[2])
    duration = int(spectrum["MeasurementTime"])
    data = spectrum["Spectrum"]["DataPoint"]
    return Spectrum(
        duration=duration,
        a0=a0, a1=a1, a2=a2,
        counts=[int(x) for x in data],
    )
It contains the same spectrum values that I used in the JSON files, along with some extra data that is not required for our task.
Practically, this is an example of data collection. This Victorian creamer from the 1890s is 130 years old, and trust me, you cannot get this data by using an SQL request 🙂

This uranium glass is slightly radioactive (the level is about 0.08 µSv/h above the background), but it is at a safe level and cannot cause any harm.
The test code itself is simple:
# Load the model
bst = XGBClassifier()
bst.load_model("../models/V1/XGBClassifier.json")
isotopes = isotopes_load("../models/V1/isotopes.json")
le = LabelEncoder()
le.classes_ = np.load("../models/V1/LabelEncoder.npy")

# Load the data
test_data = [
    ["../data/test/background1.xml", "../data/test/background2.xml"],
    ["../data/test/thorium1.xml", "../data/test/thorium2.xml"],
    ["../data/test/uraniumGlass1.xml", "../data/test/uraniumGlass2.xml"],
    ...
]

# Predict
for group in test_data:
    data = []
    for filename in group:
        spectrum = load_spectrum_xml(filename)
        features = get_features(normalize(spectrum), isotopes)
        data.append(features)
    X_test = np.array(data)
    preds = bst.predict(X_test)
    preds = le.inverse_transform(preds)
    print(preds)

#> ['Background' 'Background']
#> ['Thorium' 'Thorium']
#> ['Uranium Glass' 'Uranium Glass']
#> ...
Here, I also grouped the values from different samples and used batch prediction.
As we can see, all results are correct. I was also going to make a confusion matrix, but at least for my relatively small number of samples, all objects were detected properly.
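For a larger test set, a confusion matrix would still be a useful check; a minimal sketch with scikit-learn (here, y_true and y_pred are hypothetical arrays of true and predicted class names) could look like this:
import matplotlib.pyplot as plt
from sklearn.metrics import ConfusionMatrixDisplay, confusion_matrix

# y_true: ground-truth class names, y_pred: names returned by le.inverse_transform
cm = confusion_matrix(y_true, y_pred, labels=le.classes_)
ConfusionMatrixDisplay(cm, display_labels=le.classes_).plot(xticks_rotation=45)
plt.tight_layout()
plt.show()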
4. Testing
As a final part of this article, let's use the model in real time with a Radiacode device.
The code is almost the same as at the beginning of the article, so I will show only the essential parts. Using the radiacode library, I connect to the device, read the spectrum once per minute, and use these values to predict the isotopes:
from radiacode import RadiaCode, RealTimeData, Spectrum
import logging
import time

le = LabelEncoder()
le.classes_ = np.load("../models/V1/LabelEncoder.npy")
isotopes = isotopes_load("../models/V1/isotopes.json")
bst = XGBClassifier()
bst.load_model("../models/V1/XGBClassifier.json")

def read_spectrum(rc: RadiaCode):
    """ Read spectrum data """
    spectrum: Spectrum = rc.spectrum()
    logging.debug(f"Spectrum: {spectrum.duration} collection time")
    result = predict_spectrum(spectrum)
    logging.debug(f"Predict: {result}")

def predict_spectrum(sp: Spectrum) -> str:
    """ Predict the isotope from a spectrum """
    features = get_features(normalize(sp), isotopes)
    preds = bst.predict([features])
    return le.inverse_transform(preds)[0]

def read_cps(rc: RadiaCode):
    """ Read CPS (counts per second) values """
    data = rc.data_buf()
    for record in data:
        if isinstance(record, RealTimeData):
            logging.debug(f"CPS: {record.count_rate:.2f}")


if __name__ == '__main__':
    logging.basicConfig(
        level=logging.DEBUG, format="[%(asctime)-15s] %(message)s",
        datefmt="%Y-%m-%d %H:%M:%S"
    )
    rc = RadiaCode()
    logging.debug("ML model loaded")
    fw_version = rc.fw_version()
    logging.debug(f"Device connected, firmware {fw_version[1]}")
    rc.spectrum_reset()
    while True:
        for _ in range(12):
            read_cps(rc)
            time.sleep(5.0)
        read_spectrum(rc)
Here, I read the CPS (counts per second) values from the Radiacode every 5 seconds, just to ensure that the device works. Every minute, I read the spectrum and pass it to the model.
Before running the app, I placed the Radiacode detector near the object:

This vintage watch was made in the 1950s, and it has radium paint on the digits. Its radiation level is ~5 times the background, but it is still within a safe range (and it is actually 2 times lower than what everyone gets on an airplane during a flight).
Now, we can run the code and see the results in real time:

As we can see, the model's prediction is correct.
Readers who do not have Radiacode hardware can use the raw log files to replay the data. The link is added at the end of the article.
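As a minimal example of such a replay (the file name here is just an illustration), a saved spectrum can be passed through the same prediction pipeline:
# Offline "replay": predict from a saved JSON spectrum instead of live hardware
sp = load_spectrum_json("thorium-20250617012217.json")
print(predict_spectrum(sp))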
Conclusion
In this article, I explained the process of creating a machine learning model for predicting radioactive isotopes. I also tested the model with some radioactive samples that can be legally bought.
I also made an interactive HTMX frontend for the model, but this article is already too long. If there is public interest in this topic, it will be published in the next part.
As for the model itself, there are several ways to improve it:
- Adding more data samples and isotopes. I am not a nuclear institution, and my choice (not only from financial or legal perspectives, but also considering the free space in my apartment) is limited. Readers who have access to other isotopes and minerals are welcome to share their data, and I will try to add it to the model.
- Adding more features. In this model, I normalized all spectra, and it works well. However, in this way, we lose the information about the radioactivity level of the objects. For example, uranium glass has a much lower radiation level compared to uranium ore. To distinguish these objects more effectively, we can add the radioactivity level as an additional model feature (a sketch is shown after this list).
- Testing other model types. It looks promising to use a vector search to find the closest embeddings. It can also be more interpretable, since the model can show several of the closest isotopes. A library like FAISS can be useful for that. Another way is to use a deep learning model, which can also be interesting to test.
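As a sketch of the second idea, the overall count rate could be appended to the existing feature vector (get_features_v2 is a hypothetical name, not part of the current code):
def get_features_v2(spectrum: Spectrum, isotopes: list) -> np.array:
    """ Isotope peak values plus the average count rate as an extra feature """
    peaks = get_features(normalize(spectrum), isotopes)
    # Assumption: the raw (non-normalized) spectrum is passed in, so the
    # total number of counts divided by the duration gives the average CPS
    cps = np.sum(spectrum.counts) / spectrum.duration
    return np.append(peaks, cps)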
In this article, I used a Radiacode radiation detector. It is a nice device that allows for some interesting experiments (disclaimer: I do not have any income or other commercial interest from its sales). For those readers who do not have the Radiacode hardware, all collected data is freely available on Kaggle.
The full source code for this article is available on my Patreon page. This support helps me to buy equipment or electronics for future tests. Readers are also welcome to connect via LinkedIn, where I periodically publish smaller posts that are not big enough for a full article.
Thanks for reading.