HSUPA
Issue 04
Date 2011-09-30
Notice
The purchased products, services and features are stipulated by the contract made between Huawei and the customer. All or part of the products, services and features described in this document may not be within the purchase scope or the usage scope. Unless otherwise specified in the contract, all statements, information, and recommendations in this document are provided "AS IS" without warranties, guarantees or representations of any kind, either express or implied. The information in this document is subject to change without notice. Every effort has been made in the preparation of this document to ensure accuracy of the contents, but all statements, information, and recommendations in this document do not constitute the warranty of any kind, express or implied.
Contents
1 Introduction
1.1 Scope
1.2 Intended Audience
1.3 Change History
6 Parameters
1 Introduction
1.1 Scope
HSUPA is an important feature of 3GPP Release 6. As an uplink high speed data transmission solution, HSUPA provides a theoretical maximum rate of 5.74 Mbit/s on the Uu interface.
1.2 Intended Audience
This document is intended for:
- Personnel who are familiar with WCDMA basics
- Personnel who need to understand HSUPA
- Personnel who work with Huawei products
1.3 Change History
- Feature change: refers to a change in the HSUPA feature.
- Editorial change: refers to a change in wording or the addition of information that was not described in the earlier version.
Document Issues
The document issues are as follows:
04 (2011-09-30)
This is the document for the fourth commercial release of RAN12.0. Compared with issue 03 (2011-03-30) of RAN12.0, this issue adds descriptions of the network impact of HSUPA, dynamic CE management, and HSUPA adaptive retransmission. For details, see 2 Overview of HSUPA, 4.2.4 "Dynamic CE Management," and 4.4 "HSUPA Adaptive Retransmission."
03 (2011-03-30)
This is the document for the third commercial release of RAN12.0. Compared with issue 02 (2010-12-20) of RAN12.0, this issue optimizes the description.
02 (2010-12-20)
This is the document for the second commercial release of RAN12.0. Diff-Serv Management is moved to Differentiated HSPA Service Feature Parameter Description.
01 (2010-03-30)
This is the document for the first commercial release of RAN12.0. Compared with issue Draft (2009-12-05) of RAN12.0, this issue optimizes the description.
Draft (2009-12-05)
This is the draft of the document for RAN12.0. Compared with issue 02 (2009-06-30) of RAN11.0, this issue optimizes the description.
2 Overview of HSUPA
Since the introduction of the HSDPA technology, the downlink transmission rate has been greatly increased. To meet the rapidly growing demands for data services, 3GPP Release 6 introduced HSUPA. By applying fast scheduling, fast hybrid automatic repeat request (HARQ), shorter transmission time interval (TTI), and macro diversity combining (MDC), HSUPA improves the uplink capacity, increases the user data rate greatly, and reduces the transmission delay on the WCDMA network. HSUPA provides a higher peak rate, which helps to improve user experience. It can also increase system capacity when there is only a small number of UEs transmitting data at high data rates. This feature has the following impacts:
- A new uplink control channel that requires additional power, the E-DPCCH, was introduced for HSUPA. Therefore, activating this feature may increase the probability of call drops in a network with limited uplink coverage.
- When the uplink load is limited and there is a large number of UEs, the UEs can upload data only at a guaranteed bit rate (GBR), for example, 64 kbit/s. In such a case, the data transmission efficiency of HSUPA channels is slightly lower than that of R99 channels, because the E-DPCCH consumes system resources.
The HSUPA-related channels and abbreviations are as follows:
- E-AGCH: E-DCH Absolute Grant Channel
- E-RGCH: E-DCH Relative Grant Channel
- E-HICH: E-DCH HARQ Acknowledgement Indicator Channel
- E-DPCCH: E-DCH Dedicated Physical Control Channel
- E-DPDCH: E-DCH Dedicated Physical Data Channel
- UE: User Equipment
The TTI of the enhanced dedicated channel (E-DCH) can be 10 ms or 2 ms. The E-DCH is mapped onto the E-DPDCH, with the associated control information carried on the E-DPCCH. When the TTI is 10 ms, the E-DCH provides better uplink coverage; when the TTI is 2 ms, the E-DCH provides higher transmission rates.
- The E-DPDCH carries data in the uplink. The spreading factor (SF) of the E-DPDCH varies from SF256 to SF2, depending on the data transmission rate. A maximum of four E-DPDCHs can be used for parallel transmission: two E-DPDCHs use SF2, and the other two use SF4.
- The E-DPCCH carries control information related to data transmission in the uplink. The control information consists of the E-DCH transport format combination indicator (E-TFCI), the retransmission sequence number (RSN), and the happy bit. The SF of the E-DPCCH is fixed to 256.
- To implement the HARQ function, the E-HICH is introduced in the downlink. The E-HICH carries retransmission requests from the NodeB.
- The downlink E-AGCH and E-RGCH carry the HSUPA scheduling control information. The E-AGCH is a shared channel that carries the maximum permissible E-DPDCH to DPCCH power ratio, that is, the absolute grant. The E-RGCH is a dedicated channel that carries relative grants, which increase or decrease the maximum permissible E-DPDCH to DPCCH power ratio.
In the UE, the MAC-e/es entity is added, which encapsulates the traffic data into a MAC-e PDU and transmits it on the E-DPDCH.
To support HSUPA, 3GPP TS 25.306 defines six E-DCH UE categories. These UEs support different peak rates at the MAC layer, ranging from 711 kbit/s to 5.74 Mbit/s. Only a UE of category 6 supports the peak rate of 5.74 Mbit/s.
E-DCH Category | Max. Capability Combination | E-DCH TTI | Max. MAC-Layer Data Rate, 10 ms TTI (Mbit/s) | Max. MAC-Layer Data Rate, 2 ms TTI (Mbit/s) | Air-Interface Rate (Mbit/s)
Category 1 | 1 x SF4 | 10 ms only | 0.71 | - | 0.96
Category 2 | 2 x SF4 | 10 ms and 2 ms | 1.45 | 1.40 | 1.92
Category 3 | 2 x SF4 | 10 ms only | 1.45 | - | 1.92
Category 4 | 2 x SF2 | 10 ms and 2 ms | 2.00 | 2.89 | 3.84
Category 5 | 2 x SF2 | 10 ms only | 2.00 | - | 3.84
Category 6 | 2 x SF2 + 2 x SF4 | 10 ms and 2 ms | 2.00 | 5.74 | 5.76
Huawei RAN supports all the UE categories. HSUPA 2 ms TTI and parallel transmission of four E-DPDCHs are optional.
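The peak rates in the preceding table can be cross-checked with a short calculation. The following sketch illustrates how the category 6 figures are derived; the transport block size of 11484 bits for category 6 at a 2 ms TTI is taken from 3GPP TS 25.306, and the helper names are illustrative only.

CHIP_RATE_KCPS = 3840  # UMTS chip rate: 3.84 Mcps

def air_interface_rate_kbps(code_allocation):
    """Sum of E-DPDCH symbol rates, e.g. [(2, 2), (2, 4)] means 2xSF2 + 2xSF4."""
    return sum(codes * CHIP_RATE_KCPS // sf for codes, sf in code_allocation)

def mac_peak_rate_mbps(max_transport_block_bits, tti_ms):
    """MAC-layer peak rate: largest transport block divided by the TTI."""
    return max_transport_block_bits / (tti_ms * 1000)

# Category 6: 2xSF2 + 2xSF4, 2 ms TTI, maximum transport block of 11484 bits
print(air_interface_rate_kbps([(2, 2), (2, 4)]))  # 5760 kbit/s, that is, 5.76 Mbit/s
print(round(mac_peak_rate_mbps(11484, 2), 2))     # 5.74 Mbit/s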
Bearer mapping
When a user initiates a service request, bearer mapping is performed to determine an appropriate physical channel for carrying data based on the attributes of the requested service, such as the service domain (CS or PS), the traffic class, and the rate. If the service attributes meet the requirements for being carried on the HSUPA channel, the data can be carried on the HSUPA channel. For details on bearer mapping, see the Radio Bearers Feature Parameter Description.
Access control
After bearer mapping determines that a service can be carried on the HSUPA channel, access control performs an admission decision based on the cell load and the estimated increase in the load after the access of the new service. Access control ensures that a new service connection can obtain the required resources after admission. Accordingly, the service quality is ensured, and the cell is not overloaded. If the resources of the cell are insufficient for setting up the service connection on the HSUPA channel, or the cell does not support HSUPA, access attempts can be performed in the inter-frequency same-coverage neighboring cell to increase the access probability and improve the QoS. For details on access control, see the Load Control Feature Parameter Description.
If access control determines that the service connection can be set up on the HSUPA channel, the system sets up the HSUPA RB for the UE so that the UE can transmit data on the E-DPDCH. After the service connection setup is complete, power control, channel switching, mobility management, and load control are performed to control the UE. The first three functions control the RL of each UE, and the last function controls all the UEs in the cell. The four functions work simultaneously.
Power control
If a service can be carried on the HSUPA channel, the UE transmits control messages and traffic data to the network on the uplink channels (E-DPCCH and E-DPDCH), and the network transmits signaling to the UE on the downlink channels (E-RGCH, E-AGCH, and E-HICH). Power control assigns appropriate transmit power to each downlink channel to ensure that messages can be correctly received by UEs and to avoid wasting resources on excessively high transmit power. Power control is also used to control the transmit power of the uplink channels to ensure the transmission quality of uplink data on the Uu interface. For details on power control, see the Power Control Feature Parameter Description.
Channel switching
Channel switching monitors the data transmission requests of UEs in real time, estimates the change in the demands for system resources, and then adjusts the channel bandwidth or performs state transitions based on the estimation. This function ensures the QoS and saves system resources. For details, see section 3.2 "Channel Switching."
Mobility management
Mobility management processes transactions caused by UE movement between cells. For example, when a UE moves from one cell to another, the serving cell must be switched to ensure service continuity; when a UE moves between a cell capable of HSUPA and a cell incapable of HSUPA, the HSUPA channel must be configured or removed on the basis of the cell capability to ensure service continuity and service quality. For details on mobility management, see the Handover Feature Parameter Description.
Load control
Load control monitors the cell load in real time. If the load exceeds the congestion threshold, UEs in the cell can be handed over to other cells or the data rate can be decreased to reduce the load and reserve resources for subsequent access, thus increasing the connection success rate. If the load further increases and exceeds the overload threshold, RLs can be released to reduce the load rapidly and ensure system stability. Load control can be performed for UEs whose traffic is carried on the HSUPA channel to reduce the system load. For details on load control, see the Load Control Feature Parameter Description.
Function Description
On the UE side, after the service data at the application layer is passed to the RLC layer, the service data is segmented or concatenated into RLC PDUs and then passed to the physical layer for transmission.
ETFC selection
In each TTI, whether the UE can transmit data and how much data can be transmitted depend on E-DCH transport format combination (ETFC) selection, which is based on the following factors (see the sketch after this list):
- The serving grant (SG) from the NodeB. The SG defines the maximum permissible power used to transmit data on the E-DPDCH, which carries the MAC-e PDU. The larger the SG, the larger the MAC-e transport block size that can be supported in one TTI.
- The traffic volume in the RLC buffer of the UE.
- The available transmit power of the UE.
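The following sketch illustrates the selection logic in simplified form; it is an assumption for illustration only, not the 3GPP TS 25.321 procedure, and the grant and power limits are expressed directly as payload limits in bits.

def select_etfc(tb_sizes_bits, grant_limit_bits, buffered_bits, power_limit_bits):
    """Pick the largest MAC-e transport block allowed by all three factors.
    tb_sizes_bits: candidate transport block sizes in ascending order."""
    limit = min(grant_limit_bits, buffered_bits, power_limit_bits)
    chosen = 0
    for tb in tb_sizes_bits:
        if tb <= limit:
            chosen = tb   # keep the largest block that still fits
        else:
            break
    return chosen         # 0 means nothing is transmitted in this TTI

# Grant allows 4000 bits, 2500 bits are buffered, power allows 9000 bits
print(select_etfc([120, 354, 1000, 2798, 5772], 4000, 2500, 9000))  # -> 1000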
HARQ
After ETFC selection determines the amount of data to be transmitted in the current TTI, the RLC PDUs are encapsulated into a MAC-es PDU and then into a MAC-e PDU, and the MAC-e PDU is passed to the HARQ entity. The HARQ entity transmits the MAC-e PDU on the E-DPDCH. If the MAC-e PDU is erroneously received by the cell, the HARQ entity retransmits it until it is correctly received or the number of retransmissions reaches the predefined maximum. On the NodeB side, if the decoding of a MAC-e PDU fails, the HARQ reception process buffers the data received in each transmission, performs maximum ratio combining (MRC) of the received data, and then performs decoding again. This increases the probability of correctly receiving packets. The network can control the number of retransmissions dynamically to ensure the correct reception of MAC-e PDUs. For details, see section 4.4 "HSUPA Adaptive Retransmission."
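The retransmission behavior can be pictured with the following sketch, in which each buffered copy raises the effective decoding probability as a crude stand-in for MRC; the probability model is an assumption for illustration, not a radio-link model.

import random

def harq_transmit(single_copy_decode_prob, max_retransmissions, rng=random.random):
    """Return the number of transmissions used, or None if the PDU is finally lost."""
    combined_prob = 0.0
    for attempt in range(1, max_retransmissions + 2):  # 1 initial transmission + N retransmissions
        combined_prob = min(1.0, combined_prob + single_copy_decode_prob)  # combining gain
        if rng() < combined_prob:
            return attempt                              # ACK on the E-HICH
        # NACK on the E-HICH: the UE sends the next retransmission
    return None

print(harq_transmit(single_copy_decode_prob=0.4, max_retransmissions=3))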
Flow control
After the data is correctly received by the NodeB, it is passed to the RNC. There is an Iub interface between the RNC and the NodeB, and the bandwidth on this interface may be limited. Therefore, the NodeB needs to allocate Iub bandwidth among UEs properly through the flow control function to avoid deterioration of transmission quality due to congestion on the Iub interface. If a UE in soft handover establishes several RLs towards cells under different RNCs, the flow control function also needs to be implemented on the Iur interface to avoid congestion on the Iur interface. For details on flow control, see the Transmission Resource Management Feature Parameter Description.
MDC
If a UE is performing soft handover, the same MAC-es PDU may be passed to the RNC from several NodeBs over several Iub interfaces or passed to the SRNC from the DRNC over the Iur interface. In this case, the SRNC performs the MDC function to combine the same data, thus increasing the probability of correct reception.
Fast scheduling
Fast scheduling is very important on the NodeB side. The NodeB assigns the SG to each HSUPA UE through fast scheduling to control the maximum transmission rate of each UE on the Uu interface. Fast scheduling has a direct impact on the QoS of each UE. Thus, when the NodeB assigns SGs, it must consider the available system resources and the QoS requirement of each UE. That is, while ensuring the QoS of each UE, fast scheduling maximizes the utilization of system resources. For details on fast scheduling, see section 4.1 "Fast Scheduling."
CE management
CE resources are NodeB hardware resources used for demodulating HSUPA data. CE resources often become a bottleneck for system resources. When the CE resources are limited, it is important to make rational use of the limited resources to meet the QoS requirement of each UE. CE management takes this into account, outputs the CE resource allocation for each UE, and then passes it to the scheduling module as a reference for Uu resource allocation. For details on CE management, see section 4.2 "CE Resource Management."
The Iub flow control and CE management functions allocate appropriate resources to each UE based on the amount of available resources and the QoS requirement of each UE. Then, based on the resource allocation, the two functions calculate the maximum rate of each UE supported by the Iub resources and CE resources. The scheduling module provides an SG for the UE based on the received maximum rate. The SG ensures that the maximum rate of the HSUPA UE on the Uu interface does not exceed the received maximum rate, thus avoiding Iub resource or CE resource congestion.
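A minimal sketch of this interaction is given below; the function and parameter names are illustrative, not product interfaces. The scheduler never grants a UE more than the tightest of the per-UE limits.

def serving_grant_rate_kbps(uu_limit_kbps, iub_limit_kbps, ce_limit_kbps, mbr_kbps):
    # The SG must not let the Uu rate exceed any resource-based limit or the MBR.
    return min(uu_limit_kbps, iub_limit_kbps, ce_limit_kbps, mbr_kbps)

print(serving_grant_rate_kbps(2048, 1500, 1800, 2048))  # -> 1500, that is, Iub-limited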
During the service setup, the RNC selects appropriate channels based on the UE capability, cell capability, and service parameters to optimize the utilization of cell resources and ensure the QoS. Huawei RAN supports the setting of the types of RABs carried on the E-DCH according to service requirements. For details, see the Radio Bearers Feature Parameter Description.
Table 3-2 lists the mapping between the new state transitions and the new channel switching types.
Table 3-2 Mapping between new state transition and new channel switching
New State Transition | New Channel Switching
CELL_DCH (with E-DCH) <-> CELL_FACH | E-DCH <-> FACH
CELL_DCH (with E-DCH) <-> CELL_DCH | E-DCH <-> DCH
The switching between E-DCH and FACH and the switching between E-DCH and DCH can be triggered in the following cases:
- The UE activity changes. The UE activity is measured by the amount of data to be transmitted by the UE. When the traffic volume is high, the UE is in the high-activity state, and high-rate transport channels are required to provide high service quality. When the traffic volume is low, the UE is in the low-activity state, and low-rate channels or shared channels can be used to reduce the resource usage.
- The system capability changes during the movement of the UE. When the UE moves from an HSPA cell to a non-HSPA cell, a non-HSPA channel must be configured to ensure service continuity. When the UE moves from a non-HSPA cell to an HSPA cell, an HSPA channel must be configured if the UE supports HSPA, to provide high service quality.
- The system load changes. When the system enters the overload state, a UE in the CELL_DCH state may be switched to the common channel to reduce resource consumption and ensure system stability. This also applies to UEs on the HSPA channel. When a UE attempts an access, the access may be rejected because of HSPA overload, in which case the UE accesses the network on the DCH. When the system load returns to normal, the UE is switched back to the HSPA channel.
The adjustments of the channel bandwidth and the UE state are based on the throughput change. For details, see the DCCC Feature Parameter Description.
- The original best cell supports HSUPA, and the traffic of the UE is carried on the E-DCH. In addition:
  - The new best cell does not support HSUPA. In this case, the UE cannot set up the E-DCH towards the new best cell, and channel switching from E-DCH to DCH is performed.
  - The new best cell supports HSUPA, but a new HSUPA connection fails to be set up because of insufficient resources. In this case, a DCH is set up towards the new best cell, and channel switching from E-DCH to DCH is performed.
- The original best cell does not support HSUPA, and the traffic is carried on the DCH. When a cell supporting HSUPA becomes the best cell and the traffic can be carried on the E-DCH, channel switching from DCH to E-DCH is performed.
When the traffic that can be carried on the HSUPA channel is carried on the DCH in the previous cases, channel switching from DCH to E-DCH may be triggered if the following conditions are met:
- The UE supports HSUPA.
- The current cell supports HSUPA, or the inter-frequency same-coverage neighboring cell supports HSUPA.
The channel switching mechanisms are as follows:
- Channel switching based on timer: After the DCH is set up, this mechanism periodically attempts to perform channel switching from DCH to E-DCH.
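The preceding conditions can be summarized in the following sketch; this is a simplified illustration, and the actual decision also depends on resource and timer state not shown here.

def may_switch_dch_to_edch(ue_supports_hsupa, cell_supports_hsupa,
                           cocoverage_interfreq_cell_supports_hsupa):
    # Both conditions listed above must hold before the periodic attempt is made.
    return ue_supports_hsupa and (cell_supports_hsupa or
                                  cocoverage_interfreq_cell_supports_hsupa)

print(may_switch_dch_to_edch(True, False, True))  # -> True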
WRFD-01061209 HSUPA HARQ and Fast UL Scheduling in Node B
WRFD-01061402 Enhanced Fast UL Scheduling
The service carried on the E-DCH can be configured in two modes, non-scheduling mode and scheduling mode.
- Non-scheduling mode: Dedicated resources are reserved for the service to support its maximum bit rate (MBR). That is, the transmission rate is the MBR if the service has enough traffic volume. This mode provides a strong QoS guarantee.
  When the service is transmitted in non-scheduling mode, the maximum available rate of the service is defined during service connection establishment and is not controlled by the scheduling module. In this mode, based on the MBR requested by the service, the RNC configures the available transport block with the maximum size for the service. If the transmit power permits, the UE can use the transport block with the maximum size to transmit data. The non-scheduling mode is generally applicable to services that are sensitive to delay or have constant source rates, such as VoIP.
- Scheduling mode: The maximum available transmission rate is controlled by the HSUPA scheduler in the NodeB and can be adjusted frequently by the scheduler. If the cell resources are sufficient, a service configured in scheduling mode can obtain a bit rate in the range from the guaranteed bit rate (GBR) to the MBR. If the cell resources are congested, the bit rate is not higher than the GBR.
  When the service is transmitted in scheduling mode, the scheduling module adjusts the transmission rate of the HSUPA UE by controlling the SG assigned to the UE, based on the load on the Uu interface and the CE resource and Iub bandwidth allocation results. The scheduling mode is applicable to services with variable source rates, such as the interactive service and background service.
Only the services configured in scheduling mode are controlled by the scheduling module. Thus, the subsequent description is based on the services carried on the E-DCH and configured in scheduling mode. For detailed information about how to configure the scheduling mode for different traffic classes, see the Radio Bearers Feature Parameter Description. The scheduling period is based on the TTI: it is 10 ms for the 10 ms TTI and 2 ms for the 2 ms TTI.
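The rate range described above can be sketched as follows; this is illustrative only, since the real grant decision is made per TTI by the NodeB scheduler.

def scheduled_rate_bound_kbps(gbr_kbps, mbr_kbps, cell_congested, requested_kbps):
    # Under congestion the scheduler keeps the UE at or below its GBR;
    # otherwise it may grant anything up to the MBR.
    upper = gbr_kbps if cell_congested else mbr_kbps
    return min(requested_kbps, upper)

print(scheduled_rate_bound_kbps(64, 2048, cell_congested=False, requested_kbps=800))  # -> 800
print(scheduled_rate_bound_kbps(64, 2048, cell_congested=True, requested_kbps=800))   # -> 64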
Estimation of Uu load
When the scheduling function is performed, the NodeB estimates the Uu resources as follows:
Estimated Uu interface load = Uu load corresponding to the RTWP
Estimated remaining load = MaxTargetUlLoadFactor - Uu load corresponding to the RTWP
In these expressions, MaxTargetUlLoadFactor is a parameter configured on the RNC. It specifies the target level of uplink load control. The estimated remaining load indicates the load resources on the Uu interface that can be allocated to UEs.
In addition, the scheduling module handles the following rate decrease requests:
- The dynamic CE management module can request the HSUPA scheduling module to decrease the rates of some UEs because of a lack of CE resources. The HSUPA scheduling module then sends the AG or RG "Down" message to the associated UEs.
- The Iub flow control module can request the HSUPA scheduling module to decrease the transmission rates of some UEs because of Iub congestion. The HSUPA scheduling module then sends the RG "Down" message to the associated UEs.
- The scheduling module sends the RG "Down" message to each UE whose data rate exceeds the MBR on the Uu interface to decrease its data rate.
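The estimate above can be expressed as the following sketch, assuming that both loads are expressed as fractions of the maximum uplink load; the default value 0.75 corresponds to the MaxTargetUlLoadFactor default of 75% listed in chapter 6.

def estimated_remaining_load(load_from_rtwp, max_target_ul_load_factor=0.75):
    """Remaining Uu load that the scheduler can still distribute among the UEs."""
    return max(0.0, max_target_ul_load_factor - load_from_rtwp)

print(estimated_remaining_load(0.5))  # -> 0.25 of the uplink load budget remains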
4.1.2 Queuing
The main purpose of Uu resource scheduling is to efficiently utilize Uu resources, try to provide satisfactory QoS for more users, and provide differentiated services. The HSUPA scheduling has two sub-functions: scheduling queuing and Uu resource allocation. The purposes of UE queuing are as follows:
- To ensure the QoS of UEs: to try to ensure that the UEs can obtain the guaranteed bit rate (GBR)
- To differentiate UEs: to ensure that the UEs with a greater scheduling priority indicator (SPI) weight obtain resources preferentially
- To ensure fairness: to ensure that the UEs with the same SPI weight have the same opportunity to obtain resources
The queuing rules are as follows:
- Users whose effective rate is higher than the GBR have a lower priority than users whose effective rate is lower than the GBR.
- A greater SPI weight means a higher priority, and a smaller SPI weight means a lower priority. This helps provide differentiated services.
The queuing decision uses the following inputs (see the sketch after this list):
- Happy bit: reported by the UE.
- Effective rate: the actual rate obtained by the UE, monitored by the NodeB in real time.
- GBR: configured on the basis of user priorities. Generally, the GBR increases with the user priority. The GBR can be set through the SET UUSERGBR and ADD UCELLEFACH commands on the RNC.
- SPI weight: the SPI weight (SPIweight) is configured on the basis of the SPI. Generally, the SPIweight increases with the SPI. The SPIweight can be configured on the RNC. For details, see section 5.2 "Diff-Serv Management."
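The ordering rules above can be sketched as follows; this is an illustration only, and the actual NodeB queuing algorithm is not disclosed in this document.

def queue_order(users):
    """users: dicts with 'effective_rate', 'gbr' and 'spi_weight'.
    Users below their GBR come first; within each group a larger SPI weight wins."""
    return sorted(users, key=lambda u: (u["effective_rate"] >= u["gbr"],  # False sorts first
                                        -u["spi_weight"]))

users = [
    {"name": "A", "effective_rate": 80, "gbr": 64, "spi_weight": 30},
    {"name": "B", "effective_rate": 32, "gbr": 64, "spi_weight": 10},
    {"name": "C", "effective_rate": 16, "gbr": 64, "spi_weight": 50},
]
print([u["name"] for u in queue_order(users)])  # -> ['C', 'B', 'A']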
The maximum CE resources in one NodeB depend on the baseband card capabilities and on the CE licenses (uplink and downlink). The CE licenses in one NodeB can be shared between carriers and sectors.
- Common channels have dedicated CE resources (no impact on the CE license).
- HSDPA (UL and DL) has dedicated CE resources (no impact on the CE license).
- R99 DCH (UL and DL) and HSUPA take resources from the CE license.
- R99 and HSUPA channels share the same pool of CE resources.
4.2.2 CE Sharing
A cell must be set up in an uplink resource group (ULGROUPN) and a downlink resource group (DLGROUPN) respectively. In the uplink, CE resources can be shared in one resource group dynamically. In the downlink, CE resources can be shared in one board. A cell can be set up in only one board of the downlink resource group.
When the rate of the service source decreases, the redundant CE resources are called back. When there is a need to increase the service rate, CE resources are reserved. When there are insufficient available CE resources, CE resources are allocated to users in the serving RLS preferentially because the QoS of users depends on the resource allocation of the serving cell.
When the available CE resources are insufficient to meet the requirements of all the users in the serving RLS, user priorities need to be considered to provide differentiated services.
In addition, the dynamic CE management module needs to process messages from external functional modules, such as the resource allocation request during the establishment of a new connection and the channel reassignment request. In such cases, the QoS requirements of users and the user priorities must be considered.
Dynamic CE management is an optional function and is controlled by the license. There is no function switch parameter.
Dynamic CE resource management increases CE resource utilization on the network, improving system capacity. This feature also reduces the service access delay by quickly allocating and retrieving CE resources, improving user experience. With dynamic CE resource management, the throughput of HSUPA-capable UEs can be increased, leading to a slight increase in the rise over thermal (RoT), a slight decrease in coverage, and an increased probability of call drops. When CE resources are insufficient, dynamic CE resource management has a negative impact on the admission success rate.
In RAN10.0, dynamic CE resource management and HSUPA DCCC cannot be enabled at the same time. In RAN11.0 and later, HSUPA DCCC does not take effect in cells in the active set of an HSUPA-capable UE that have dynamic CE resource management enabled. If dynamic CE resource management is enabled for the entire network, disable HSUPA DCCC.
Procedure
Figure 4-4 Procedure of dynamic CE management
The main functions of the dynamic CE management module are shown in Figure 4-4. The procedure of dynamic CE management is as follows:
Step 1 Calling back CE resources
Based on the actual data rates of users, the CE management module calls back the idle CE resources to improve the utilization of resources and updates the information about available system resources.
Step 2 Processing external related messages
Based on the requirements in external signaling messages, the CE management module allocates appropriate CE resources to users and updates the information about available system resources.
Step 3 Increasing CE resources dynamically
Based on the estimation of the requirement for CE resources, the CE management module increases CE resources for users that require more CE resources. If the available CE resources are insufficient after the processing in the first two steps, the CE management module provides differentiated services and adjusts CE resources among users dynamically based on user priorities.
----End
Steps 1 to 3 can be triggered periodically, and the periods can be different. Step 2 is triggered by events. If several processing tasks are triggered at the same time, the CE management module performs the processing by following the procedure shown in Figure 4-4.
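The callback and increase steps can be sketched as follows; this is illustrative only, the event-driven processing of step 2 is omitted, and the data structures are assumptions.

def dynamic_ce_cycle(alloc, need, free_pool, priority_order):
    """alloc/need: dicts mapping user -> allocated/required CEs; free_pool: idle CEs."""
    # Step 1: call back idle CEs from users that hold more than they currently need.
    for user in alloc:
        if alloc[user] > need[user]:
            free_pool += alloc[user] - need[user]
            alloc[user] = need[user]
    # Step 3: increase CEs in descending priority order while free resources remain.
    for user in priority_order:
        if need[user] > alloc[user] and free_pool > 0:
            grant = min(need[user] - alloc[user], free_pool)
            alloc[user] += grant
            free_pool -= grant
    return alloc, free_pool

alloc = {"A": 4, "B": 1, "C": 2}
need = {"A": 2, "B": 3, "C": 2}
print(dynamic_ce_cycle(alloc, need, free_pool=0, priority_order=["B", "A", "C"]))
# -> ({'A': 2, 'B': 3, 'C': 2}, 0)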
User Priority Queuing
User priorities are determined based on the following information:
- Happy bit reported by the UE
- Actual rate obtained by the UE through real-time monitoring on the NodeB
- GBR requirement of the UE
- SPI
The queuing method is the same as that of the HSUPA fast scheduling algorithm. For details, see section 4.1 "Fast Scheduling."
CE Resource Callback
CE resource callback is based on the monitoring of the CE resource requirement of each user. For users whose currently allocated CE resources exceed the calculated CE requirement, the required CE resources are retained and the redundant CE resources are called back.
CE Resource Increase
If the CE resources requested by a user exceed the currently allocated CE resources, the CE resources must be increased. In addition, it is possible that the available CE resources do not meet the requirements of all users. In such a case, user priorities need to be considered. For details, see User Priority Queuing in this section. The conditions of the serving cell also need to be considered. If the serving cell of the user belongs to the current NodeB, the user is called a serving user; otherwise, the user is called a non-serving user. A serving user has a higher priority than a non-serving user because the QoS of the user mainly depends on the QoS obtained in the serving cell. The CE resource increase consists of the following functions:
After the connection is established, serving users cannot preempt resources mutually during the dynamic CE resource allocation. Therefore, it is possible that high-priority users cannot obtain sufficient resources because low-priority users occupy a large amount of resources. In such a case, the resource fairness adjustment must be performed among serving RLs. The method is as follows: If the current available CE resources do not meet the requirement of user rate increase, reduce the CE resources occupied by the serving user with the lowest priority to reserve resources and meet the rate increase requirement of high-priority users.
- If the monitoring of CE resource requirements shows that the CE resources allocated to the serving users are less than the CE resources requested, CE resources must be increased.
- If the available resources meet the requirements of resource increase, CE resources are allocated according to the requirements of users.
- If the available resources are insufficient to meet the requirements of resource increase, CE resources are allocated in descending order of user priorities.
After the resource allocation to serving RLs is complete, CE resources are allocated to non-serving users requesting resource increase in descending order of user priorities only when there are redundant resources.
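The allocation order described above can be sketched as follows: serving users come first, in descending priority. The structure is an illustration, not the NodeB implementation.

def ce_increase_order(users):
    """users: dicts with 'serving' (bool) and 'priority' (larger value = higher priority)."""
    return sorted(users, key=lambda u: (not u["serving"], -u["priority"]))

requests = [
    {"name": "non_serving_1", "serving": False, "priority": 9},
    {"name": "serving_1",     "serving": True,  "priority": 3},
    {"name": "serving_2",     "serving": True,  "priority": 7},
]
print([u["name"] for u in ce_increase_order(requests)])
# -> ['serving_2', 'serving_1', 'non_serving_1']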
4.3.1 Background
This feature is intended for the NodeB. In the case of soft handovers, if the serving cell of the user belongs to the current NodeB, the user is called serving user. If the serving cell of the user does not belong to the current NodeB, the user is called non-serving user. Though soft handovers can improve the reception quality of HSUPA link data, the quality is improved at the price of multiplied resource consumption, especially for Iub resources and CE resources. These two types of resources are the bottlenecks for system resources. The service quality mainly depends on the reception quality in the serving cell. Therefore, when the Iub resources and CE resources are insufficient to meet the data transmission requirements of all the users, the non-serving RL resources are allocated to the serving RLs of other users at the price of soft
handover gains of some users. This improves the utilization of resources and increases the total throughput. If the Iub resources or CE resources are limited, this feature reduces the resources allocated to the non-serving users to ensure the service quality of serving users and increase the capacity of the entire system. This is an optional function and is controlled by the license. There is no function switch parameter.
- When the Iub resources are in the normal state, each user is allocated Iub resources uniformly. For details, see the Transmission Resource Management Feature Parameter Description.
- When the Iub resources are in the congestion state, most resources are allocated to serving users.
This resource allocation policy ensures the service quality of serving users preferentially, and it is applicable only to BE services.
- When a serving RL is established or added, the RL preferentially preempts the resources allocated to non-serving users if the idle resources do not meet its requirement. If the requirement is still not met after all the preemptable resources of non-serving users have been preempted, the RL preempts resources of serving users.
- When a non-serving RL is established or added, it can preempt only the resources occupied by non-serving users if the idle resources do not meet its requirement.
Dynamic CE resource allocation is responsible for monitoring the requirement of each user for CE resources and increasing CE resources for the user with the currently allocated CE resources less than the CE resources requested.
Dynamic CE resource allocation increases CE resources for serving users preferentially. If the idle CE resources do not meet the requirement of serving users for CE resource increase, serving users can preempt CE resources of non-serving users. If there are idle CE resources even after the requirement of serving users for CE resources is met, dynamic CE resource allocation increases resources according to the requirement of non-serving users for CE resources.
During the preemption process, the CE resources of non-serving users can be reduced to only one CE. That is, E-DPCCH demodulation is still supported, but the E-DPDCH carrying data is not demodulated. The CE resources of a serving RL must meet or exceed the GBR demodulation requirement.
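The preemption rule can be sketched as follows; the CE amounts are illustrative, and the per-user floor of one CE follows the description above.

def preempt_non_serving(non_serving_alloc, ce_needed):
    """non_serving_alloc: dict user -> allocated CEs. Returns (freed CEs, new allocation)."""
    freed = 0
    for user, ces in non_serving_alloc.items():
        if freed >= ce_needed:
            break
        take = min(ces - 1, ce_needed - freed)  # never reduce a non-serving RL below 1 CE
        if take > 0:
            non_serving_alloc[user] = ces - take
            freed += take
    return freed, non_serving_alloc

print(preempt_non_serving({"ns1": 4, "ns2": 2}, ce_needed=4))
# -> (4, {'ns1': 1, 'ns2': 1})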
4.4.1 Background
The retransmission times of a MAC-e PDU refer to the average number of retransmissions after which the MAC-e PDU of the user is correctly received by the NodeB. The UTRAN can adjust the interference of a
UE to the cell and the requirement of service transmission for UE transmit power by setting different target retransmission times for the MAC-e PDU. The reasons are as follows:
- If the signal-to-noise ratio (Eb/No) of the uplink E-DPDCH carrying the MAC-e PDU that reaches the NodeB in each transmission is increased, the probability that the MAC-e PDU is correctly received is also increased. This decreases the required retransmission times but increases the interference to the NodeB.
- If the signal-to-noise ratio of the uplink E-DPDCH that reaches the NodeB in each transmission is reduced, the retransmission times required for correctly receiving the MAC-e PDU are increased, but the interference to the NodeB is reduced.
The HSUPA power control algorithm can adjust the signal-to-noise ratio of the E-DPDCH that reaches the NodeB in each transmission by comparing the actual retransmission times with the target retransmission times, so as to keep the retransmission times of the MAC-e PDU within the target range. In this manner, the interference of the UE to the system is adjusted, and so is the requirement for UE transmit power. Generally, to enable the user to obtain a higher rate, the target retransmission times are set to a smaller value at the price of a moderate increase in load. This configuration is called "small target retransmission times". The "small target retransmission times" configuration, however, may have a negative effect in the following cases:
- The UE transmit power is limited. When the UE moves to the edge of the cell, the transmit power is not sufficient. Therefore, the probability that the MAC-e PDU is correctly received decreases, and the UE throughput drops sharply.
- The cell load is limited. When the load of the cell serving the UE is high, the scheduling algorithm may no longer provide additional rate grants for the UE. Therefore, the UE throughput is also limited.
HSUPA adaptive retransmission increases the target retransmission times adaptively based on the previous two cases to achieve the following purposes:
- Increasing the retransmission times obtains a time diversity gain, reduces the requirement for UE transmit power, enlarges the coverage range, and increases the UE throughput.
- Reducing the interference of the UE to the system enables the UE to obtain a higher rate grant and increase the UE throughput, thus increasing the cell throughput and uplink capacity.
When the problems in the previous two cases are solved, HSUPA adaptive retransmission restores the target retransmission times to a smaller value. Accordingly, the transmit power resources of the UE and the load resources of the cell can be fully used to enable the UE rate to approach or reach the throughput limit, thus improving user experience. HSUPA adaptive retransmission is an optional function and is controlled by the license. HSUPA adaptive retransmission helps to increase the throughput per user and the uplink capacity of a cell. It has the following benefits:
- When uplink coverage is limited, it increases the uplink throughput of UEs at the cell edge.
- When the uplink power load in a cell is limited, it increases the target number of uplink retransmissions to increase the uplink throughput and capacity of the cell.
When CE resources are insufficient, HSUPA adaptive retransmission does not adjust the target number of uplink retransmissions. As a result, it does not increase the uplink cell capacity.
The HSUPA adaptive retransmission algorithm periodically determines whether the target retransmission times of each user need to be adjusted. To avoid fluctuations in the system load, the number of users adjusted in each period is limited to a fixed maximum. When determining whether to adjust the target retransmission times, the NodeB monitors the UE transmit power and resources such as the Uu load and CE resources, as described below.
- Determining whether the transmit power of the HSUPA UE is limited: The UE reports the scheduling information (SI) to the NodeB. The SI contains the actual transmit power of the UE. Based on the SI, the NodeB estimates whether the transmit power of the UE meets the requirement of the current service rate and thus determines whether the transmit power of the UE is limited.
- Determining whether the load on the Uu interface is limited: Based on the monitoring result of the Uu interface load, the NodeB determines that the load on the Uu interface is limited when the load is high and that the load on the Uu interface is restored when the load is low.
- Monitoring CE resources: The NodeB monitors the available CE resources. If the available CE resources are sufficient, retransmission adjustment is allowed. Otherwise, it is rejected.
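The periodic decision can be sketched as follows; the target values and the form of the adjustment are assumptions for illustration, not product settings.

def adjust_target_retransmissions(current_target, ue_power_limited, uu_load_limited,
                                  ce_sufficient, small_target=1, large_target=3):
    if not ce_sufficient:
        return current_target                      # CE shortage: leave the target unchanged
    if ue_power_limited or uu_load_limited:
        return max(current_target, large_target)   # allow more retransmissions
    return min(current_target, small_target)       # restore the small target

print(adjust_target_retransmissions(1, ue_power_limited=True,
                                    uu_load_limited=False, ce_sufficient=True))  # -> 3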
For delay-sensitive real-time services such as the CS voice service and VoIP service, HSUPA functions ensure that the packet transmission delay is smaller than the acceptable delay limit. For rate-sensitive non-real-time services such as the interactive service, background service, and streaming service, HSUPA functions allocate service rates that are higher than or equal to the GBR. Increasing the service rates improves user satisfaction. HSUPA QoS management ensures the connectivity of services.
HSUPA QoS guarantee meets the QoS requirements of services through related HSUPA functions. Table 5-1 lists the relations between the HSUPA functions (bearer mapping, mobility management, load control, fast scheduling, flow control, dynamic CE management, and HSUPA adaptive retransmission) and the QoS indicators (service connectivity, service delay, and service rate).
Table 5-1 Relations between HSUPA functions and QoS indicators
These relations between HSUPA functions and QoS indicators are described as follows:
Bearer mapping
The HSUPA channel can provide a higher peak rate than the DCH and reduce the data transmission delay. Thus, bearer mapping enables services that require a high rate or a small delay to be carried on the HSUPA channel to improve user experience. For details, see the Radio Bearers Feature Parameter Description.
Mobility management
Mobility management ensures service continuity. When a UE moves to an HSUPA-capable cell, services supporting the HSUPA bearer should be carried on the HSUPA channel to ensure service continuity. For details, see the Handover Feature Parameter Description.
Load control
The network resources are limited. Therefore, when a large number of users attempt to access the network, the access control function is required to control the access to ensure the QoS of the admitted users.
When a cell is overloaded, the overload control function is required to relieve congestion and ensure the QoS of most users. For details, see the Load Control Feature Parameter Description.
Fast scheduling
Based on the achieved service rate, GBR, and MBR, the fast scheduling function tries to allocate enough resources to meet the requirement for the transmission rate on the Uu interface. After the required rate is allocated, the requirement for transmission delay is also met. For details, see section 4.1 "Fast Scheduling."
Flow control
By allocating appropriate Iub bandwidth to users, the flow control function reduces the transmission time on the Iub interface. For details, see the Transmission Resource Management Feature Parameter Description.
Dynamic CE management
Dynamic CE management ensures efficient utilization of CE resources. In addition, dynamic CE management allocates more CE resources according to the requirements of users to ensure the QoS of users. For details, see section 4.2 "CE Resource Management."
HSUPA adaptive retransmission
When a UE moves to the edge of a cell and the transmit power is not sufficient, the uplink data transmission may be interrupted. In such cases, HSUPA adaptive retransmission increases the retransmission times automatically to ensure the continuity of data transmission. When the transmit power is high enough, the retransmission times decrease automatically to increase the service rate. For details, see section 4.4 "HSUPA Adaptive Retransmission."
When the basic QoS requirements of some users cannot be met because of system resource congestion, high-priority users or services are allocated resources preferentially, thus improving user satisfaction. When resources remain after the basic QoS requirements of all users are met, the remaining resources are additionally allocated to users. High-priority users or services are allocated more of these additional resources, thus further improving user satisfaction.
HSUPA Diff-Serv management provides the following QoS differentiated service policies:
- Differentiated services based on service types
- Differentiated services based on user priorities
To further quantify the effect of Diff-Serv management, differentiated services based on SPI weights are introduced. This section describes the differentiated services based on SPI weights and the differentiated service policies. For details, see Differentiated HSPA Service Feature Parameter Description.
6 Parameters
Table 6-1 Parameter description

Parameter ID: DLGROUPN
NE: NodeB
MML: ADD DLGROUP (Mandatory), MOD DLGROUP (Mandatory), RMV DLGROUP (Mandatory)
Description: Meaning: Downlink BB resource group number. GUI Value Range: 0~3. Actual Value Range: 0~3. Unit: None. Default Value: -

Parameter ID: ULGROUPN
NE: NodeB
MML: ADD ULGROUP (Mandatory), MOD ULGROUP (Mandatory), RMV ULGROUP (Mandatory)
Description: Meaning: Uplink BB resource group number. GUI Value Range: 0~3. Actual Value Range: 0~3. Unit: None. Default Value: -

Parameter ID: MaxTargetUlLoadFactor
NE: BSC6900
MML: ADD UCELLHSUPA (Optional), MOD UCELLHSUPA (Optional)
Description: Meaning: The parameter specifies the target value of the uplink load, which is decreased through HSUPA power control on the NodeB side. For details about this parameter, refer to 3GPP TS 25.433. GUI Value Range: 0~100. Actual Value Range: 0~1, step: 0.01. Unit: percent. Default Value: 75

Parameter ID: SPIweight
NE: BSC6900
MML: ADD UOPERSPIWEIGHT (Mandatory)
Description: Meaning: Weighting weight for service scheduling priority. This weight is used in two algorithms. In the scheduling algorithm, it is used to adjust the handling priority for different services. In the Iub congestion algorithm, it is used to allocate bandwidth for different services. The higher the weight, the more likely it is that the user's handling priority is increased or that the user obtains more Iub bandwidth. GUI Value Range: 1~100. Actual Value Range: 1~100. Unit: percent. Default Value: None
Parameter ID: SPIweight
NE: BSC6900
MML: SET USPIWEIGHT (Mandatory)
Description: Meaning: Weighting weight for service scheduling priority. This weight is used in two algorithms. In the scheduling algorithm, it is used to adjust the handling priority for different services. In the Iub congestion algorithm, it is used to allocate bandwidth for different services. The higher the weight, the more likely it is that the user's handling priority is increased or that the user obtains more Iub bandwidth. GUI Value Range: 1~100. Actual Value Range: 1~100. Unit: percent. Default Value: None
Parameter ID: SPI
Description: Meaning: Scheduling priority of interactive and background services on HS-DSCH or EDCH. Value 11 indicates the highest priority, while value 2 indicates the lowest priority. Values 0, 1, 12, 13, 14, and 15 are reserved for the other services. GUI Value Range: 2~11. Actual Value Range: 2~11. Unit: None. Default Value: None
Parameter ID: SPI
Description: Meaning: Scheduling priority of data frames on HS-DSCH or EDCH. Value 15 indicates the highest priority, while value 0 indicates the lowest priority. GUI Value Range: 0~15. Actual Value Range: 0~15. Unit: None. Default Value: None
Parameter ID: SPI
NE: BSC6900
MML: SET USCHEDULEPRIOMAP (Mandatory)
Description: Meaning: Scheduling priority of interactive and background services on HS-DSCH or EDCH. Value 11 indicates the highest priority, while value 2 indicates the lowest priority. Values 0, 1, 12, 13, 14, and 15 are reserved for the other services. GUI Value Range: 2~11. Actual Value Range: 2~11. Unit: None. Default Value: None
Parameter ID: SPI
NE: BSC6900
MML: SET USPIWEIGHT (Mandatory)
Description: Meaning: Scheduling priority of data frames on HS-DSCH or EDCH. Value 15 indicates the highest priority, while value 0 indicates the lowest priority. GUI Value Range: 0~15. Actual Value Range: 0~15. Unit: None. Default Value: None
MML: SET MACEPARA (Optional)
Description: Meaning: HSUPA overload scheduling switch. GUI Value Range: OPEN(Open), CLOSE(Close). Actual Value Range: OPEN, CLOSE. Unit: None. Default Value: -
7 Counters
For details, see the BSC6900 UMTS Performance Counter Reference and NodeB Performance Counter Reference.
8 Glossary
For the acronyms, abbreviations, terms, and definitions, see the Glossary.
9 Reference Documents
[1] 3GPP TS 25.211: "Physical channels and mapping of transport channels onto physical channels (FDD)"
[2] 3GPP TS 25.306: "UE Radio Access capabilities"
[3] 3GPP TS 25.321: "Medium Access Control (MAC) protocol specification"
[4] 3GPP TS 25.331: "Radio Resource Control (RRC) Protocol Specification"
[5] Radio Bearers Feature Parameter Description
[6] Load Control Feature Parameter Description
[7] Power Control Feature Parameter Description
[8] Handover Feature Parameter Description
[9] State Transition Feature Parameter Description
[10] Transmission Resource Management Feature Parameter Description
[12] BSC6900 UMTS Performance Counter Reference
[13] NodeB Performance Counter Reference