Simply stated: no. They are two different things. sRGB defines a transfer curve used to convert linear render data into something viewable (something that looks correct to our eyes), and it also defines an extremely narrow colour gamut/profile that was (if I recall correctly) based on the average colour range of screens from years and years ago.
We shouldn't be using sRGB in our rendering pipeline at all - only at the end, when saving the work to a JPG, PNG, TIFF, or similar.
Thomas made an interesting post about this a few days ago:
I quote Troy_s here. (This is a somewhat old answer, but still relevant for understanding the relationship between linear and sRGB. Nowadays we work within a Filmic/ACES/wide-gamut OpenColorIO configuration "to bring a significant dynamic range and lighting capabilities to your work: a closer-to-photorealistic view transform for your renders", and for rendering to HDR display technology.)
"
Two Sides to Colour
At the core of colour and how it is encoded into data are two loose concepts.
The first is the colour of the light, or in colour nerd terms,
chromaticity. When we think of an RGB triplet of data, we have to ask ourselves what the values mean. RGB data is merely an arbitrary numerical representation of three primary lights for each of the red, green, and blue channels. If I were to say "Hey, grab a flashlight in your house that looks red and turn the slider up to 0.8-ish.", that is roughly what we are communicating when we fail to use colour spaces and colour management; we are communicating absolutely nothing.
Without associating the RGB encoded values to a colour space, we are speaking gibberish. We know nothing about the actual colour of the light on your flashlight, nor do we know what the heck 0.8 means in any meaningful representation.
In addition to the chromaticities of the primary lights, we also have what is known as
a transfer function, sometimes erroneously called "gamma." This is a term that describes how the values relate to each other in intensity, where the ground truth is a radiometrically linear quantity of light in relation to a scene or a display’s output. 0.8 in our example above tells us absolutely nothing about how intense the value is in relation to the other values because we don't really know anything about the encoding system used nor how the transfer function is defined.
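To make the transfer function idea concrete, here is a minimal Python sketch of the sRGB decode step, using the standard piecewise function from the sRGB specification. It shows what that bare 0.8 from the flashlight example would mean *if* we agree it is sRGB-encoded: roughly 0.60 in linear light.

```python
def srgb_to_linear(v: float) -> float:
    """Decode an sRGB-encoded value in [0, 1] to a radiometrically
    linear value, per the piecewise sRGB transfer function."""
    if v <= 0.04045:
        return v / 12.92          # linear toe segment near black
    return ((v + 0.055) / 1.055) ** 2.4

# Only once we declare the encoding does 0.8 gain meaning:
print(round(srgb_to_linear(0.8), 3))  # -> 0.604
```

Without that declared transfer function, the 0.8 is just a number; with it, we know the actual ratio of light it represents.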
Colour Space
To solve the above issues, we need more information than purely the relative RGB data. We need to communicate what the values actually mean in relation to some known standards or ratios. If we know what the colours of the primary RGB lights are in absolute terms, what the transfer function ratios are for the data, and some other aspects, we could call that combination of variables a colour space; an encapsulation of a bunch of additional data not communicated with the relative RGB encoded data.
sRGB is one such creature. sRGB has very clearly defined absolute chromaticities defined for each of the red, green, blue channels, as well as a defined white point. It also includes specifications for the transfer function. These facets allow us to 'decode' the RGB values in an image and properly communicate their intention and handling through a pipeline.
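For reference, these are the chromaticity coordinates the sRGB standard (IEC 61966-2-1) pins down. Writing them out as plain data (a sketch in Python here) underlines the point: a colour space is exactly this extra information riding alongside the relative RGB values.

```python
# CIE xy chromaticity coordinates fixed by the sRGB standard
# for its three primaries and its white point (D65).
SRGB_PRIMARIES = {
    "red":   (0.64, 0.33),
    "green": (0.30, 0.60),
    "blue":  (0.15, 0.06),
}
SRGB_WHITE_D65 = (0.3127, 0.3290)
```

Together with the piecewise transfer function, these values are what let software decode an sRGB triplet unambiguously.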
Why Linear?
In practical terms, a render engine is completely blind to the colour of the lights (see chromaticity above) and merely worries about performing math on the data values. A ray tracer attempts to, as best as it can with a tri-colour or
tristimulus model, emulate real-world physics when it comes to rendering.
Real-world physics is pretty simple when it comes to math. One unit of light plus one unit of light is, shockingly, two units of light. Energies operate in a linear fashion, and as such, they behave very rationally.
sRGB, on the other hand, for various historical reasons and circumstances, is a non-linear colour space. That is, the transfer function portion of the colour space is bent in such a way that the values are not linearly related. 0.4 plus 0.4 in sRGB is not twice the radiometric amount of light! To see a quick visual effect of this nasty non-linear math, fill a background in your favourite imaging application with full green plus full blue for a cyan colour. Now take a fuzzy, fully red brush and paint across it. See the nasty fringing? That is the result of bad and broken math due to a non-linear reference space.
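The arithmetic behind that fringing can be checked directly. A short Python sketch, using the standard piecewise sRGB decode: naively summing the encoded values 0.4 + 0.4 = 0.8 actually lands at roughly 4.5 times the original light, not the 2 times physics would demand.

```python
def srgb_to_linear(v):
    # Standard piecewise sRGB decode: a linear toe near black,
    # a ~2.4-exponent curve for everything else.
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

one_light = srgb_to_linear(0.4)        # ~0.133 units of linear light
naive_sum = srgb_to_linear(0.4 + 0.4)  # 0.8 encoded decodes to ~0.604
print(naive_sum / one_light)           # ~4.54x, not the 2x physics demands
```

This is exactly why compositing math belongs in a linear reference space: add the decoded linear values, and one light plus one light is two lights again.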
So why sRGB? If we use a linearized reference space, and we were to dump those linear values direct to the display, our display's output would look wrong. In particular, it would look extremely dark because we haven't properly "corrected" the values back to a nonlinear response. When we select sRGB in [a 3d render engine], we are telling the program to roll the linearized reference data back out through the correct nonlinear sRGB transfer curve.
If we don't do this, the data is entirely incorrect for storage in a JPEG, a TIFF, or other such nonlinear wrappers. It is also completely incorrect for viewing."