pub struct ReplicatedLayer(/* private fields */);
This layer has no parallelization.
Implementations
impl ReplicatedLayer
pub fn from_linear(lin: Linear) -> Result<Arc<dyn QuantMethod>>
pub fn new(
    in_dim: usize,
    out_dim: usize,
    config: &Option<QuantizedConfig>,
    bias: bool,
    vb: ShardedVarBuilder<'_>,
) -> Result<Arc<dyn QuantMethod>>
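As an illustrative sketch (not taken from the crate's documentation), the snippet below builds a replicated layer from a plain candle_nn::Linear via from_linear. The zero-initialized placeholder weight and the crate-root import paths are assumptions.

```rust
use std::sync::Arc;

use candle_core::{DType, Device, Result, Tensor};
use candle_nn::Linear;
use mistralrs_quant::{QuantMethod, ReplicatedLayer};

// Wrap an existing (unquantized) linear layer so it can be used wherever a
// `dyn QuantMethod` is expected. The zero weight is only a placeholder.
fn build_replicated(in_dim: usize, out_dim: usize, dev: &Device) -> Result<Arc<dyn QuantMethod>> {
    let w = Tensor::zeros((out_dim, in_dim), DType::F32, dev)?;
    let lin = Linear::new(w, None); // no bias
    ReplicatedLayer::from_linear(lin)
}
```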
Trait Implementations
impl Debug for ReplicatedLayer
impl QuantMethod for ReplicatedLayer
fn new(_method: QuantMethodConfig) -> Result<Self>
where
    Self: Sized,
fn forward(&self, a: &Tensor) -> Result<Tensor>
Compute matmul of self and a. self should contain the weights.
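A minimal usage sketch (assumed, not from the docs): the layer carries the [out_dim, in_dim] weight, so a [batch, in_dim] activation produces a [batch, out_dim] result.

```rust
use candle_core::{DType, Device, Result, Tensor};
use mistralrs_quant::QuantMethod;

// Illustrative only: `layer` holds the weights; `a` is the activation batch.
fn project(layer: &dyn QuantMethod, batch: usize, in_dim: usize, dev: &Device) -> Result<Tensor> {
    let a = Tensor::zeros((batch, in_dim), DType::F32, dev)?;
    layer.forward(&a) // output shape: [batch, out_dim]
}
```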
fn add_delta_w(&self, delta: &Tensor) -> Result<Arc<dyn QuantMethod>>
Add a delta weight from LoRA to the weights. This should be prescaled with alpha.
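A hedged sketch of how a caller might satisfy the prescaling requirement: the low-rank product is scaled (typically by alpha / rank) before being handed to add_delta_w. The names lora_a, lora_b, and scaling are illustrative, not part of this crate.

```rust
use std::sync::Arc;

use candle_core::{Result, Tensor};
use mistralrs_quant::QuantMethod;

// Merge a LoRA adapter into the base weights. `delta` is prescaled here, as
// `add_delta_w` expects it to be.
fn merge_lora(
    layer: Arc<dyn QuantMethod>,
    lora_a: &Tensor, // [rank, in_dim]
    lora_b: &Tensor, // [out_dim, rank]
    scaling: f64,    // typically alpha / rank
) -> Result<Arc<dyn QuantMethod>> {
    let delta = lora_b.matmul(lora_a)?.affine(scaling, 0.0)?;
    layer.add_delta_w(&delta)
}
```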
fn dequantize_w(&self) -> Result<Tensor>
fn dtype_and_device(&self) -> (DType, Device)
Weight dtype and device
fn begin_track_stats(&mut self) -> Result<()>
Begin tracking stats into an ImatrixLayerStats
fn end_track_stats(&self) -> Result<Tensor>
End tracking stats into an ImatrixLayerStats. Returns the computed imatrix.
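Taken together, these two methods imply a calibration loop; a hedged sketch of that flow (not the crate's own tooling) is shown below. The assumption that statistics accumulate during forward passes is inferred from the docs above.

```rust
use candle_core::{Result, Tensor};
use mistralrs_quant::QuantMethod;

// Illustrative imatrix collection: begin tracking, run representative
// activations through the layer, then return the computed imatrix.
fn collect_imatrix(layer: &mut dyn QuantMethod, calibration: &[Tensor]) -> Result<Tensor> {
    layer.begin_track_stats()?;
    for a in calibration {
        let _ = layer.forward(a)?; // stats are assumed to accumulate here
    }
    layer.end_track_stats()
}
```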
fn quantized_act_type(&self) -> Option<DType>
If a quantized method, return the activation dtype.
fn unquant_weight_bias(&self) -> Option<(Tensor, Option<Tensor>)>
fn get_max_isq_cpu_threads(&self, dtype: IsqType) -> Option<NonZeroUsize>
fn apply_isq(
    self: Arc<Self>,
    dtype: Option<IsqType>,
    device: Device,
    n_quantized: &AtomicUsize,
    imatrix_weight: Option<Vec<f32>>,
) -> Result<Arc<dyn QuantMethod>>
If the quant is backed by a qmatmul.
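A hedged call-site sketch: request in-situ quantization of a previously built layer. The IsqType::Q4K target and CPU device are illustrative choices, and the counter is simply the progress tally the signature asks for.

```rust
use std::sync::{atomic::AtomicUsize, Arc};

use candle_core::{Device, Result};
use mistralrs_quant::{IsqType, QuantMethod};

// Quantize the layer's weights in place (returning a new quantized handle).
fn quantize_layer(layer: Arc<dyn QuantMethod>) -> Result<Arc<dyn QuantMethod>> {
    let n_quantized = AtomicUsize::new(0); // shared progress counter
    layer.apply_isq(Some(IsqType::Q4K), Device::Cpu, &n_quantized, None)
}
```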
fn forward_autocast(&self, a: &Tensor) -> Result<Tensor>
Compute matmul of self and a. self should contain the weights. Automatically casts to the required quantization activation type and back.
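A rough sketch of the behavior these docs describe (not the crate's actual implementation): cast the activation to the dtype reported by quantized_act_type, run forward, and cast back.

```rust
use candle_core::{Result, Tensor};
use mistralrs_quant::QuantMethod;

// Conceptual equivalent of the autocast wrapper, under the assumption that
// `quantized_act_type` names the dtype the quantized backend expects.
fn forward_autocast_sketch(layer: &dyn QuantMethod, a: &Tensor) -> Result<Tensor> {
    match layer.quantized_act_type() {
        Some(act_dtype) => {
            let out = layer.forward(&a.to_dtype(act_dtype)?)?;
            out.to_dtype(a.dtype()) // restore the caller's dtype
        }
        None => layer.forward(a),
    }
}
```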
fn forward_via_half(&self, a: &Tensor) -> Result<Tensor>
Compute matmul of self and a. self should contain the weights. This may go via half precision if it is supported.
impl QuantizedSerde for ReplicatedLayer
Auto Trait Implementations
impl Freeze for ReplicatedLayer
impl !RefUnwindSafe for ReplicatedLayer
impl Send for ReplicatedLayer
impl Sync for ReplicatedLayer
impl Unpin for ReplicatedLayer
impl !UnwindSafe for ReplicatedLayer
Blanket Implementations
impl<T> BorrowMut<T> for T
where
    T: ?Sized,
fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
impl<T> Instrument for T
fn instrument(self, span: Span) -> Instrumented<Self>
fn in_current_span(self) -> Instrumented<Self>
impl<T> IntoEither for T
fn into_either(self, into_left: bool) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left is true. Converts self into a Right variant of Either<Self, Self> otherwise.
fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true. Converts self into a Right variant of Either<Self, Self> otherwise.